How to Make Sweet Potato Vine Cuttings
Ornamental sweet potato vines (Ipomoea batatas) are grown for their striking foliage and trumpet-shaped flowers. They come in a variety of colors, including the variegated-leaf cultivar "Tricolor" and the chartreuse-leaved "Margarita." They grow year-round in U.S. Department of Agriculture plant hardiness zones 9 to 11 and are treated as annuals elsewhere. Ornamental sweet potato vines must be grown from cuttings to faithfully reproduce the favorable characteristics of the cultivar. The cuttings will root quickly if taken in late spring after the weather warms, but you must take the cuttings from the tips of the vines to ensure success.
1. Water the sweet potato vine to a 5-inch depth the night before making cuttings to ensure the stems and foliage are well hydrated. Run a garden hose at the base for five minutes rather than spraying from above to prevent bacteria growth on the stems.
2. Wipe the blade of a utility knife or a pair of pruning shears with a cotton ball or clean paper towel soaked in rubbing alcohol. Let the blade air dry, then wipe it with a dry paper towel to remove any alcohol residue.
3. Choose a healthy, vigorous stem on the sweet potato vine. Find one with healthy young foliage at the tip and a pliant stem. Avoid stems with obvious damage or signs of fungal infections, such as blackish residue or soft, weeping spots.
4. Measure back 6 to 8 inches from the tip of the stem. Make a cut 1/8- to 1/4-inch below a set of leaves using the cleaned and sanitized utility knife or shears. Make the cut straight across, rather than angled.
5. Place the sweet potato vine cutting in a warm, shady spot for 48 hours to harden, which will lessen the risk of transplant shock later on. Wrap the cutting in a barely moist paper towel. Mist the paper towel with water periodically so it never fully dries out.
6. Fill a 6-inch pot with a mixture of equal parts perlite, coarse sand and sterile compost. Poke a planting hole in the center that is equal to half the length of the sweet potato vine cutting; for instance, a 6-inch-long cutting requires a 3-inch-deep hole.
7. Remove the leaves from the bottom one-half of the sweet potato vine cutting. Insert the bottom end into the planting hole. Press the perlite mixture snugly against the stem to increase contact and eliminate any trapped air pockets.
8. Place the potted sweet potato vine cutting outdoors under light shade or indoors on a warm, bright windowsill with filtered light. Maintain constant moisture in the soil but do not allow it to become soggy since the cutting will rot.
9. Test for roots four to five weeks after potting the sweet potato vine cutting by gently pulling on the base of the stem. Feel if the vine cutting is firmly attached in the perlite mixture, which indicates that it has successfully rooted.
10. Transplant the rooted sweet potato vine cutting into a permanent container or bed two weeks after rooting. Acclimate it to normal outdoor conditions for one week before transplanting to prevent heat and moisture stress.
Tips
- Dust the end of the cutting with hormone powder to hasten rooting, if desired.
Writer Bio
Samantha McMullen began writing professionally in 2001. Her nearly 20 years of experience in horticulture informs her work, which has appeared in publications such as Mother Earth News. | https://homeguides.sfgate.com/make-sweet-potato-vine-cuttings-49626.html |
The application of Fourier deconvolution to reaction time data: a cautionary note.
The Fourier transform method in conjunction with frequency domain smoothing techniques has been suggested as a powerful tool for examining components in a serial, additive reaction time model (P. L. Smith, 1990). The robustness of this method and its sensitivity to violations of the serial model's assumptions are evaluated. When an incorrect distribution was used in recovering an unobserved component, the results gave no indication that an incorrect distribution had been used, and they were just as interpretable as those obtained using the correct distribution. These results demonstrate that the assumptions underlying the method cannot be assessed from the result of deconvolution, and the method cannot show that the purported component actually arises from the serial combination.
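To illustrate the kind of procedure the abstract refers to, the sketch below shows the basic Fourier deconvolution step for a serial, additive model: if the observed reaction time is the sum of two independent components, the density of the unknown component can be estimated by dividing the FFT of the observed density by the FFT of the assumed component and smoothing in the frequency domain. This is a minimal illustration only; the distributions, sample size, bin width, and Gaussian smoother are assumptions for the example, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a serial, additive RT model: observed RT = component A + component B.
a = rng.exponential(scale=120, size=50_000)      # unobserved component (ms)
b = rng.normal(loc=300, scale=40, size=50_000)   # assumed "known" component (ms)
rt = a + b                                       # observed reaction times

# Histogram-based density estimates on a common grid (2 ms bins).
grid = np.arange(0, 1500, 2.0)
obs_pdf, _ = np.histogram(rt, bins=grid, density=True)
b_pdf, _ = np.histogram(b, bins=grid, density=True)

# Deconvolve in the frequency domain, with Gaussian smoothing to damp
# the high-frequency noise that the division amplifies.
F_obs = np.fft.rfft(obs_pdf)
F_b = np.fft.rfft(b_pdf)
freqs = np.fft.rfftfreq(obs_pdf.size, d=2.0)
smoother = np.exp(-(freqs / 0.01) ** 2)          # assumed cutoff; tune as needed
F_a = smoother * F_obs / np.where(np.abs(F_b) < 1e-8, 1e-8, F_b)

a_recovered = np.clip(np.fft.irfft(F_a, n=obs_pdf.size), 0, None)
a_recovered /= a_recovered.sum() * 2.0           # renormalize to a density
```

As the abstract cautions, substituting a plausible but incorrect density for `b_pdf` will still produce a smooth, interpretable `a_recovered`, so the output alone cannot validate the serial-model assumptions.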
| |
School safety trainings are needed to address the root causes of school violence, including high frequency problems such as bullying, Adverse Childhood Experiences (ACEs), and mental health crises, that threaten students’ safety. In 2019, 43% of North Carolina (NC) middle school youth reported bullying victimization, 22% were cyber bullied, 47% were in fights, 31% carried weapons, and 23% contemplated suicide (https://nccd.cdc.gov/youthonline). Rates have risen or remain unchanged over the past decade in NC. The percentage of students in NC reporting safety concerns at school has more than doubled from 5.6% in 2009 to 13.6% in 2019.
The Goal of the proposed project is to provide statewide, trauma-focused, evidenced-based training and consultation that addresses youth mental health issues, bullying, and victimization. Each year of the grant, we will provide training and consultation to schools across NC using the Youth Mental Health First Aid (YMHFA) program to increase awareness of, and early intervention in, student mental health crises. Our multi-disciplinary team will enhance YMHFA by integrating information on bullying, trauma, and ACEs in a supplementary training to deepen the focus on root causes of school violence. We will also add a second day of skills training simulations that allow participants to practice key skills. Instructors will facilitate the training online to break down geographic and transportation barriers. Sponsoring online training will increase our reach into rural areas and QOZs across NC with minimal additional funding needed. Deliverables include: 1) YMHFA training materials with supplementary training materials on bullying, ACEs, and trauma; 2) Skills practice role play materials; 3) # of participants served; 4) Satisfaction data; and 5) Financial and progress reports.
The North Carolina Youth Violence Prevention Center (NC-YVPC; www.nc-yvpc.org) is a not-for-profit agency that coordinates prevention programming in rural, underserved areas of NC. The team has more than a decade of experience implementing prevention programs with federal research funding. The proposed training will positively impact school safety across NC by providing innovative training experiences on youth mental health for school personnel and law enforcement. The knowledge and skills gained will guide new behaviors and lower the risk for future violence. Reducing root causes of youth violence, such as bullying, victimization, and trauma, will decrease the escalation to subsequent forms of violence later in the lifespan. The project will promote school safety in NC and enhance the YMHFA curriculum with new content and skills training material to disseminate across the US.
| https://bja.ojp.gov/funding/awards/15pbja-21-gg-04677-stop |
A bond is a debt instrument issued by the central/state government, PSUs and corporates. Government bonds are of three types:
T-Bills – mature in less than one year
T-Notes – mature in one to ten years
T-Bonds – mature in more than ten years
Debt instruments represent a contract; whereby, one party lends money to another on pre-determined terms pertaining to the rate of interest, the period of such payments and the repayment of principal amounts borrowed.
Bonds and stocks are both securities, but the major difference between the two is that stockholders have an equity stake in the company (i.e. they are owners), whereas bondholders have a creditor stake in the company (i.e. they are lenders).
Another difference is that bonds usually have a defined term or maturity, after which the bond is redeemed whereas stocks may be outstanding indefinitely.
The most common process for issuing bonds is through underwriting. When a bond issue is underwritten, one or more securities firms or banks, forming a syndicate, buy the entire issue of bonds from the issuer and re-sell them to investors.
In contrast government bonds are usually issued in an auction. In some cases, both members of the public and banks may bid for bonds.
The overall rate of return on the bond depends on both the terms of the bond and the price paid. The terms of the bond, such as the coupon, are fixed in advance and the price is determined by the market.
The price of a bond is determined by the forces of demand and supply, as in the case of any other assets. The price of bond also depends on a number of other factors and will fluctuate according to changes in economic conditions, general money market conditions including the state of money supply in the economy, prevailing interest rate, future interest rate expectations and credit quality of issuers.
Features of Bonds:
Principal:
Par or face amount is the amount on which the issuer pays interest, and which most commonly, has to be repaid at the end of the term.
Maturity:
The issuer has to repay the principal amount on the maturity date.
Coupon:
It is the interest rate that the issuer pays to the bond holders. Usually this rate is fixed throughout the life of the bond. It can also vary with a money market index, such as LIBOR.
Yield:
It is the rate of return received from investing in the bond, i.e., the percentage return earned on the amount invested, given the coupon rate. From the issuer's perspective, the yield is also the borrowing cost. It usually refers to either:
Current Yield, which is simply the annual interest payment divided by the current market price of the bond; or
Yield to Maturity, which accounts for the current market price as well as the amount and timing of all remaining coupon payments, coupled with the repayment due at maturity.
The relationship between yield and term to maturity is called a yield curve. The yield curve is a graph plotting this relationship.
The yield and price of a bond are inversely related, meaning when market interest rates rise, bond prices fall and bond yield rises. Likewise, when market interest rate falls, bond price rises and bond yield falls.
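As a rough illustration of these definitions and of the inverse price/yield relationship, the sketch below prices a plain fixed-rate bond from its cash flows and computes the current yield and an approximate yield to maturity. The face value, coupon, and rates are made-up numbers for illustration, not figures from the text.

```python
def bond_price(face, coupon_rate, years, market_rate, freq=1):
    """Present value of a fixed-rate bond's coupons plus redemption at maturity."""
    coupon = face * coupon_rate / freq
    periods = years * freq
    r = market_rate / freq
    pv_coupons = sum(coupon / (1 + r) ** t for t in range(1, periods + 1))
    pv_face = face / (1 + r) ** periods
    return pv_coupons + pv_face

def current_yield(face, coupon_rate, price):
    """Annual interest payment divided by the current market price."""
    return face * coupon_rate / price

def yield_to_maturity(face, coupon_rate, years, price, freq=1):
    """Solve price = PV(cash flows) for the discount rate by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, years, mid, freq) > price:
            lo = mid   # computed price too high, so the true yield is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Example: 1,000 face value, 6% annual coupon, 10 years, bought at 950.
price = 950.0
print(current_yield(1000, 0.06, price))          # ~6.3%
print(yield_to_maturity(1000, 0.06, 10, price))  # ~6.7%

# Inverse relationship: the same bond priced at rising market rates.
for rate in (0.05, 0.06, 0.07):
    print(rate, round(bond_price(1000, 0.06, 10, rate), 2))  # price falls as the rate rises
```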
Types of Bonds:
Fixed rate bonds have a coupon that remains constant throughout the life of the bond.
Floating rate bonds have a variable coupon that is linked to a reference rate of interest, such as LIBOR or EURIBOR.
Zero-coupon bonds pay no regular interest. They are issued at a substantial discount to par value, so that the interest is effectively rolled up to maturity. The bondholder receives the full principal amount on the redemption date.
High-yield bonds are bonds that are rated below investment grade by the credit rating agencies. As these bonds are more risky than investment grade bonds, investors expect to earn a higher yield.
Convertible bonds enable a bondholder to convert a bond to a number of shares of the issuer's common stock.
Inflation-indexed bond is an arrangement wherein the principal amount and the interest payments are indexed to inflation. The interest rate is normally lower than the fixed rate bonds with a comparable maturity.
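For the zero-coupon case described above, the issue discount follows directly from discounting the single redemption payment; a small sketch with made-up numbers:

```python
# Zero-coupon bond: no coupons, so the price is simply the discounted face value.
def zero_coupon_price(face, market_rate, years):
    return face / (1 + market_rate) ** years

print(zero_coupon_price(1000, 0.05, 10))  # ~613.91, issued at a deep discount to par
```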
Investing in Bonds:
Bonds are bought and traded mostly by financial institutions such as pension funds, insurance companies, hedge funds, and banks. Insurance companies and pension funds have liabilities that essentially consist of fixed amounts payable on predetermined dates. They buy bonds to match their liabilities, and may be compelled by law to do so.
Price changes in a bond will affect the value of a bond portfolio: the portfolio's value falls when bond prices fall. This can be damaging for professional investors such as banks, insurance companies, pension funds and asset managers.
Fixed rate bonds are subject to interest rate risk, meaning that their market prices will decrease in value when the generally prevailing interest rates rise. When the market interest rate rises, the market price of bonds will fall, reflecting investors' ability to get a higher interest rate on their money elsewhere.
Bonds are also subject to various other risks such as call and repayment risk, credit risk, reinvestment risk, liquidity risk, exchange rate risk, volatility risk, inflation risk, sovereign risk and yield curve risk. Again, some of these only affect certain classes of investors. | https://www.karvyonline.com/knowledge-center/advanced/Bond-Market |
April 16, 2021
Summary:
- Solid quarter for stocks overall, although there was a change in sector leadership with Technology lagging Energy and other sectors.
- Bonds saw one of their worst quarters historically.
- Interest rates began to rise along with equity markets reflecting the growing optimism of economic recovery, consumers being “unleashed” to spend and anticipated impact of the recent stimulus packages.
- Inflation is becoming a concern, as conditions are riper for an increase in inflation than markets have seen in more than a decade.
- With interest rates low, the impact of the rising U.S. debt balance has been muted. If rates turn higher, the impact of the $28 trillion in U.S. debt will be more significant.
- LWA has made strategic adjustments in many actively managed allocations based on this backdrop and will continue to monitor the overall situation.
- Infrastructure and Tax Packages, if passed could have broad impact on market performance and strategic financial planning.
The first quarter of 2021 saw solid gains for stocks driven by optimism around COVID trend improvements and prospects for economic reopening. Bonds, on the other hand, delivered one of the worst quarterly returns in history as the economic enthusiasm drove rates up from near-zero levels. This theme also translated into very different sector performance than the market has been used to as the almost forgotten energy sector rose about 30% in the quarter while the stalwart tech sector lagged with a gain of “only” 2%. In the bond market, there was much more action than may have been apparent. The rate on the 10-year U.S. Treasury rose 0.8% during the quarter, which is not huge on an absolute basis, but when measured against a base rate of less than 1%, the impact on bond values was substantial. Generally, our client allocations have avoided long-maturity bonds precisely for this risk; for example, TLT, a popular ETF that tracks the U.S. 20-year Treasury, had a total return loss of 14% in the quarter.
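The arithmetic behind a loss of that size is straightforward duration math. The sketch below is a rough, hedged illustration; the ~18-year modified duration is an assumption typical of a 20-plus-year Treasury fund, not a figure from the letter.

```python
# First-order bond math: percentage price change ≈ -modified duration × yield change.
modified_duration = 18.0   # assumed, roughly typical for a 20+ year Treasury fund
yield_change = 0.008       # the ~0.8% rise in long rates during the quarter

approx_price_change = -modified_duration * yield_change
print(f"{approx_price_change:.1%}")  # ≈ -14.4%, in line with the quarter's loss
```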
The rise in stock prices and interest rates reflects rapidly growing optimism in the strength of economic recovery expected to unfold as more are vaccinated, virus numbers ebb, and businesses reopen. Evident in the chart below is how economists have been almost unable to keep up with the process of factoring in various growth drivers. Right now, the current estimate for U.S. economic expansion in 2021 is somewhere north of 6%, a level not experienced in decades. Of course, to be fair, most of this is rebound growth off the COVID contraction and it will take some time to see where employment and growth can settle out, but it is not likely to stay in the six percent range. Meanwhile, enjoy it while we have it.
In the next couple of months, a combination of activity restarts and comparisons to depressed data last year is likely to result in many eye-popping growth statistics. The recently released jobs data for March showed over 900 thousand jobs created versus the consensus forecast of 660 thousand. The unemployment rate was 6%, down from its peak last year of over 14%. The strong headline numbers were boosted by reopenings that brought back a surge of restaurant, travel, and leisure employees. The total employment base remains below the pre-COVID levels, but the gap is expected to narrow further over the next several months. Completely closing that gap and then surpassing previous employment levels will probably stretch out over a much longer period as the economy adjusts to more permanent structural changes brought on by COVID. Also, data indicates the pace of rehiring still faces challenges in attracting people that have effectively had higher income by not working as a product of supplemental unemployment benefits and repeated stimulus distributions.
A range of other economic data is also showing the swell in demand brought on by increased business and social mobility. Higher savings reserves, recent stimulus payments, and the motivation to get out in the springtime weather are all working to create a rush in activity. Regional surveys of business activity, travel, and occupancy statistics, as well as data on orders, inventory, and shipping, all point to a strong pickup. Collectively, data from the Institute for Supply Management (ISM) provides a broad read on current business conditions and outlook. The most recent reading of 65 came in well above the 50 level that is considered the dividing line between expansion and contraction. Rarely do large economies experience such a whipsaw from one extreme to another, but then again COVID has proven record-setting on many levels.
The concentrated increase in economic activity concurrent with the ongoing stimulus is bringing with it higher prices for many goods and services. Inflation has been a passing concern only intermittently over the past decade, but sustainable increases were rarely reflected in the data. This time around, the threat of trending inflation looks more likely with price increases larger, more widespread, and the result of several factors. The dramatic growth in the money supply from the various COVID relief efforts has resulted in the basic inflation definition of more money chasing fewer goods. Additionally, disruptions creating supply constraints have made conditions worse. Events including the recent blockage of the Suez Canal, the Texas freeze, and a fire at a key Japanese semiconductor plant have all had far-reaching impacts on supply chains. The semiconductor disruption, in particular, seemed minor at the time but the fallout has been extensive as shortages of key chip components have halted the production of goods ranging from computers to autos.
Collectively, it would be hard for conditions to be riper to precipitate an increase in inflation unlike the markets have seen in more than a decade. The surge in demand, supply constraints, and the fast-growing base of cheap or free money all pressing at the same time has produced sharp price increases in the widest array of commodities and material inputs we can recall seeing in a very long time. So far, policymakers have not been overly concerned about the inflation risk, though we think consumers are just beginning to feel the real effects. The magnitude and breadth of price increases already in place virtually ensure a flow-through to a wide range of finished goods and consumables. Crops, energy, building materials, and computer chips are among the items experiencing large price increases and, in some cases shortages of supply, meaning there are few things where consumers will be able to avoid the consequences.
A less-than-temporary change in the inflation backdrop has important considerations for the investment landscape. Strategies that have focused on thematic investing and high growth (regardless of valuation) that have performed so well over the last several years could fall from favor while more traditional value stocks, commodities, and other physical assets may be better placed to protect against erosion of purchasing power. This change is a result of higher implied interest rates that reduce the present value of far-off expectations. Conversely, bonds and other fixed payment streams can see values suffer during times of rising inflation and, potentially as well, rising rates. The charts below from BlackRock provide some historical data on how different types of investments perform under different inflation scenarios.
In the fourth quarter of last year and the first quarter of this year, we added positions based on these factors across many of our strategies. These investments vary but include more traditional value companies and holdings in the energy sector and commodities. At the same time, we have guided our fixed income exposure lower by not reinvesting maturities and trimming holdings in many situations. Most of our fixed-income holdings have very short maturity horizons so a rise in rates has had a less negative impact on the current values, but we are sensitive to potential loss of purchasing power associated with these low-yielding instruments.
Inflation can be a very punitive, regressive form of tax which is why signs of increases deserve a lot of attention. Fighting back against inflation typically requires policymakers to raise interest rates to cool off the economy and the excess demand. The popular euphemism is taking away the punch bowl. However, right now policymakers the world over are trying to combat the impacts of COVID and have shown little appetite to be the spoiler by preemptively hiking rates. At the most recent Federal Open Market Committee policy meeting last month, Jerome Powell was quite adamant the Fed won’t raise rates here until they see a 3.5% unemployment rate and inflation averaging 2%. He stressed it would be preferred to see the economy “running hot” and inflation above 2% for a period before acting. Keeping rates too low in an environment such as this runs the risk of further fanning the inflation forces.
However, there are a couple of key impediments for policymakers to factor into any decision to increase market interest rate targets. First, any increase would run counter to current initiatives centered on stimulating the recovery and, second, raising rates may increasingly become a budgetary issue. Even though the current statistics are depicting a healthy bounce in jobs and economic activity, this is happening simultaneously with ongoing external infusions of cash. The latest round of stimulus payments of $1,400 for individuals and thousands more for many families hit just this past month while supplemental unemployment benefits have been renewed and remain in effect until September of this year. Hiking interest rates under these conditions seem highly contradictory and this factor alone may make it more difficult for the Fed to consider acting earlier than currently outlined.
Even as growth and/or inflation develop in a way that may justify an increase in interest rates, we believe an additional consideration could center on the corresponding incremental fiscal interest expense. Interest payments on the rising debt balance of the U.S. have not harmed the budget while rates have been steadily worked lower, however, as debt balances have ballooned higher in response to COVID, increasing rates on a significantly higher debt balance can become meaningful. The graph below shows interest expense as a percent of GDP, now under 2% and far less than the prior peak of almost 3.2% in the ‘90s, but already rising. If interest rates start trending higher rather than lower, the impact will be magnified and could be a hurdle for a rate increase decision. For reference, in the early ‘90s, total public debt was less than 60% of GDP, while now that figure is roughly 130% and rising (source: St. Louis Fed). With the total outstanding U.S. debt over $28 trillion, every 1% rise in the average interest rate paid will increase the interest expense as a percent of GDP by 1.4%.
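The quick arithmetic behind that figure, using the debt and debt-to-GDP levels quoted above, is shown below. This is a back-of-the-envelope check for illustration, not an official projection.

```python
debt = 28e12              # total outstanding U.S. debt, per the text
debt_to_gdp = 1.30        # roughly 130% of GDP, per the text
gdp = debt / debt_to_gdp  # implies GDP of roughly $21.5 trillion

extra_interest = debt * 0.01   # a 1% rise in the average rate paid on the debt
print(extra_interest / gdp)    # ≈ 0.013, i.e. roughly 1.3-1.4% of GDP
```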
The U.S. fiscal indebtedness is growing at a faster clip due to the actions taken to address the economic disruption of COVID and now through new policy measures intended to continue the recovery. Financial markets have not appeared overly concerned about the rise in fiscal debt around the world, though it typically is the type of issue of when it matters, it really matters. The forecast sees sizeable deficits for the foreseeable future. The chart below graphs the trend in deficit by year and shows that the first half of this fiscal year has witnessed the largest year-to-date funding gap in recent history. While we don’t think it has been the primary source of the rise in interest rates over the past several months, we do believe it is a contributing factor. As the funding gap increases, it necessitates finding more buyers for the bond issuance. Attracting more buyers with rates so low is a challenge, and thus the pressure on rates.
The Fed has been covering a significant portion of this gap for years now by printing money and purchasing bonds. At the most recent Fed meeting, Chairman Powell confirmed that they plan to continue to buy $80 billion worth of government bonds and $40 billion in mortgage bonds monthly – or $1.4 trillion per year. Looking at that graph again, one can see the Federal deficit is already $1 trillion larger than last year’s record level while Congress continues to look to pass additional, multi-trillion stimulus bills yet to be included in these figures. The numbers are big, and the current trend has only seen them become larger. According to our analysis, the plan for future commitments will need either higher market rates (with potential growth and market value consequences) or more Fed purchases to cover the obligations. The legislative proposals have looked at recouping some of the spending in taxes, but we think it will be a fraction of the additional spending. The trade-off in the legislative process is the more that is attempted to be recaptured through taxes, the greater the headwind on economic activity.
The unbridled willingness to spend stems from an economic theory spreading in Washington referred to as MMT (Modern Monetary Theory). The MMT believers contend deficits don’t matter and creating new money should be the primary focus to achieve full employment. Only when full employment is achieved would inflation be a risk, and this could be balanced through higher taxes. The theory is supported by the idea the U.S. has a long history of running deficits and expanding its debt without major issues. However, in the past, the U.S. has enjoyed very strong international demand for its debt as the leading global superpower and de facto world reserve currency. If that source of demand for our debt diminishes, it needs to be offset from either of two alternatives – domestic buyers of Treasury debt or the Fed. The following chart shows foreign investors have been reducing their relative holdings of U.S Treasury debt for several years now. In our view, the trend of growing deficits and more stimulus significantly increases the likelihood more intervention will be needed from the Fed. We believe these factors can feed back into the current inflation cycle in a more significant fashion than in the past.
Although we believe the case for rising inflation is more pressing now, the outcomes remain dependent on how future events unfold. Running monetary policy hot and generous legislative measures are actions that would bolster the inflation risks. Economic growth would presumably be higher with more money in the system, but it is likely to undermine the purchasing power of today’s dollars. Infrastructure spending, perpetual social programs (e.g., Universal Basic Income), and debt or other obligation forgiveness (student loans, rent) are proposals that could further increase government spending relative to receipts and lead down this path. Alternatively, if further measures fail to pass and monetary policy becomes more restrictive, the natural headwinds created by elevated commodity prices and higher rates would most likely slow the economy and again shift the discussion back to deflationary scenarios.
We would like to provide a forecast with high confidence on how the economy may continue to grow and its relationship to financial markets but with undetermined variables measuring in the trillions, any certainty is impossible. In stark contrast to this time last year, sentiment on the equity market is now extremely optimistic as people see the large improvements in various data. Making capital allocations in these environments requires careful consideration of associated risks. In our opinion, the recovery has significantly expanded the list of investment landscape sectors that could be described as a bubble. Assets that become the focus of a speculative bubble often go far higher than a prudent investor may expect, but most often the corrective side of the price chart is quite steep. In these conditions, we remind investors that sometimes holding lower-yielding assets or cash to preserve value is the best long-term strategy. We believe markets will remain subject to several external factors yet to come this year. As these developments unfold, we will continue to communicate any changes in our view and corresponding implications for client allocations. We thank you for your continued trust in us and hope for good health.
Bradley Williams, Chief Investment Officer
Lowe Wealth Advisors
Please remember that past performance may not be indicative of future results. Different types of investments involve varying degrees of risk, and there can be no assurance that the future performance of any specific investment, investment strategy, or product (including the investments and/or investment strategies recommended or undertaken by Lowe Wealth Advisors, LLC), or any non-investment related content, referred to directly or indirectly in this newsletter will be profitable, equal any corresponding indicated historical performance level(s), be suitable for your portfolio or individual situation or prove successful. Due to various factors, including changing market conditions and/or applicable laws, the content may no longer be reflective of current opinions or positions. Moreover, you should not assume that any discussion or information contained in this newsletter serves as the receipt of, or as a substitute for, personalized investment advice from Lowe Wealth Advisors, LLC. To the extent that a reader has any questions regarding the applicability of any specific issue discussed above to his/her situation, he/she is encouraged to consult with the professional advisor of his/her choosing. Lowe Wealth Advisors, LLC is neither a law firm nor a certified public accounting firm and no portion of the newsletter content should be construed as legal or accounting advice. A copy of the Lowe Wealth Advisors, LLC’s current written disclosure statement discussing our advisory services and fees is available upon request. If you are a Lowe Wealth Advisors, LLC client, please remember to contact Lowe Wealth Advisors, LLC, in writing, if there are any changes in your personal/financial situation or investment objectives to review/evaluating/revising our previous recommendations and/or services. | https://www.lowewealthadvisors.com/reopening-surge/ |
In the context of executing military infrastructure works, such as levelling land for the establishment of military camps and connecting roads or constructing machine-gun nests and trenches, the remains of ancient residential and burial complexes were often revealed - and frequently destroyed.
Certain heights (artificial mounds) in the Axios valley, usually chosen by allied troops to set up camps, were the locations of prehistoric and proto-historic settlements or ancient tombs.
These fortuitous discoveries of archaeological sites resulted in singular small-scale excavations at specific sites, such as Chauchitza or Boemitsa, with the participation of various scientists, archaeologists, philologists, even physicians serving in the allied forces.
The French and English showed particular archaeological interest during the years of the Macedonian Front. The French Army of the Orient included an organised "Military Archaeological Service", with the main purpose of seeking and mapping archaeological sites in Macedonia, collecting findings from surface surveys and occasionally carrying out test trenches. Similarly, the English set up the "Archaeological Service" of the Macedonian Front.
A large part of the findings was transported for safe-keeping to Thessaloniki, either to the White Tower (Museum of the British Forces) or the Rotonda (Museum of the French Forces). Today, several of those antiquities are kept in the Archaeological Museums of Thessaloniki and Kilkis, while others had been transferred since that period to museums abroad (the Louvre Museum in Paris, the British Museum in London, the Ashmolean Museum in Oxford, the National Museum of Scotland in Edinburgh).
Those first peculiar excavations, in the din of the battle, aroused the interest of British archaeologists in particular, serving as the starting point for a more systematic and targeted subsequent archaeological and topographic survey in the region of Kilkis.
British archaeologists Stanley Casson and Walter Heurtley returned to Kilkis after the Great War had ended, in the 1920s, as members of the British School of Archaeology, to conduct excavations at various prehistoric archaeological sites, such as Chauchitsa, Limnotopos (Vardino), Axiochori (Vardaroftsa) and Kalindria (Kilindir).
The results of those inter-war and post-war pioneering surveys shed considerable light on the past of the broader region, largely unknown until then. To this day, they continue to serve as a reference point for subsequent studies and surveys in the area of Central Macedonia, especially on the pre-historic and the proto-historic period. | https://www.warandarchaeology.gr/en/excavation-trenches-within-the-war-trenches |
- Researching, designing, and implementing assigned tasks.
- Identifying areas for modifications in existing applications and subsequently developing these modifications.
- Writing and implementing efficient code to implement the required task.
- Perform quality assurance procedures and unit testing to assure the quality of the work delivered.
- Deploying and adhering to Brightskies software development tools, processes and metrics.
- Identify and troubleshoot issues and coding problems.
- Collaborate with members of the project team (including designers, testers and developers) to consistently improve functionality and user-friendliness of the developed applications.
- Bachelor’s Degree in Computer Science or related field
- 3-5 years’ experience in software development.
- Strong .NET / C# / NET Core / EF Core development experience.
- Experience in configuration and maintenance of Azure DevOps pipelines.
- Experience with relational & non-relational database design and development.
- Understanding of Agile methodologies
- Additional skills with the below will be a plus:
- Experience with PostgreSQL.
- Experience with Azure services.
- Strong communication skills to effectively collaborate with other relevant team members or clients.
- Willingness to troubleshoot and desire to probe further to solve problems
- Eager to learn and explore different technologies. | https://brightskiesinc.com/jobs/software-developer-net-developer/ |
In Shanghai to check on the first shipment to leave the new factory
My latest trip to China started with a quick catch up with our production manager, Michael, over dinner last Wednesday evening after checking into my hotel here in Shanghai. We met again the next morning to head out to the factory on the outskirts of the city that produces our Chinese Classical furniture. At the end of last year our production unit here moved to a new location, not far from the previous facility but with bigger and better premises. This was the first chance I’d had to see the new site and I was pleased to see that everything seemed to have settled in very quickly.
The factory move took more than a full week to complete, and included moving the workers’ own personal possessions as well as machinery and materials. The vast majority of carpenters and other staff (including cooks and cleaners) live on site, returning to their home provinces sometimes hundreds of miles away for Chinese New Year and other public holidays. A move like this therefore provides even more of a logistical challenge than it would do in the UK.
A few months after the move (and after the long New Year holiday), the new factory is very much up and running. With just one or two exceptions all of the workers from the previous factory made the move, so the skill base and experience in making Shimu furniture has been maintained.
I had timed this visit to Shanghai so that I could inspect the pieces that are to be included on our next container, due to be loaded on Wednesday this week and shipped a few days later. Most of these pieces were already finished, other than the final hardware being added and last minute checks. As always we have several ‘made to order’ items due to ship out on this container. These pieces were all finished to the woodwork stage so that I could make final checks myself on the designs, and discuss the finish to be applied where this was not standard.
[Photos: hand painting a Shanxi Butterfly Screen; polishing a Ming Carved Screen; applying an undercoat to Yoke-Back Side Chairs; a Carved Coffee Table at the woodwork stage; smoothing the rattan panel on a Carved Coffee Table; sanding the lattice shelf on a Carved Coffee Table]
It was great to be involved at first hand at this stage of the production process, a chance I rarely get as I am normally in the UK relying on photos and communication from the staff here in Shanghai. Along with Michael and the head of the ‘lacquering’ staff I was able to make specific tweaks to a lacquer, adding small amounts of yellow, red and black to the original grey colour the factory had produced. The objective was to achieve a particular colour (Farrow & Ball ‘Mole’s Breath’, to be exact) that one of our interior design customers has specified for a client’s TV cabinet. After forty minutes or so of repeatedly mixing lacquers, loading into a spray gun and applying the colour to a wood sample, we had managed to reach something very close to the paint sample provided by the designer. Allowing for the fact that a last polish and layer of varnish will darken the final colour slightly, we should be able to get an almost exact match.
It was great to see the various stages of the production process – everything from the woodwork completion to sanding, sealing, polishing, all the various stages of lacquering and finishing, right through to adding the brass hardware and final touches. By the time this blog post is published everything will be finished, checked and packed ready for loading in a day or two.
I spent the next couple of days checking out some new designs and a huge array of accessories with Michael and other staff. The factory here provides furniture for the internal Chinese market as well as for Shimu, and as a fairly recent venture the owners have launched a new brand together with an interior designer to offer a broader selection of products for the home, with showrooms being set up in Shanghai and other major cities around China. Over the coming months and years we plan to offer many of these products as part of the Shimu range in the UK and Europe, so look out for the new collections of lamps, ornaments, wall art and other home décor later this year.
More to follow soon as I head to Beijing to catch up with our suppliers there and to source Chinese antiques for our next container. | https://www.shimu.co.uk/blogs/news/in-shanghai-to-check-on-the-first-shipment-to-leave-the-new-factory |
For the second year in a row, a survey for Newsweek magazine has ranked the Jewish General Hospital as the top healthcare facility in Quebec and among the top five in Canada.
The second annual ranking was undertaken as part of a broader look at hospitals around the world. The magazine surveyed 21 countries, including Canada, the United States, Germany, Switzerland, Singapore, Israel and Sweden.
The rankings are based on recommendations from medical professionals, results from patient surveys and key medical performance indicators.
“It is extremely gratifying that the JGH has again been named the leader in Quebec and among the best in Canada,” says Dr. Lawrence Rosenberg, President and CEO of CIUSSS West-Central Montreal.
“In whatever success we achieve on behalf of our patients, the credit belongs to members of our staff in all fields, as well as our generous donors and dedicated volunteers.
“For more than eight decades, the hospital has been a beacon of diversity and inclusion, having been founded to provide care and employment to people of all backgrounds. Its non-discrimination policy was among the first in Quebec, serving as an example for other institutions to follow in the public and private sectors.
“The JGH has also been tireless in its efforts to provide patient-centred care that is instilled with compassion and attention to the emotional needs of patients and their families. At the same time, staff continuously strive to improve quality through technological innovation and scientific research.
“These qualities are intrinsic to all of the facilities that deliver health care and social services in our CIUSSS.”
The survey was conducted by Newsweek in partnership with Statista Inc., a global market research and consumer data company. | https://jghnews.ciussswestcentral.ca/jgh-again-named-a-leader-in-annual-newsweek-survey/ |
Sagebrush study targets diversity to improve sage grouse habitat
The University of Idaho is conducting a study on thinning old stands of sagebrush to improve sage grouse habitat. Overly dense sagebrush suppresses new sagebrush, grasses and forbs and does not provide good cover for sage grouse. Pretreatment work has already been done, and mechanical removal is set for the end of this month. Data will be collected and evaluated next spring.
Thinning old sagebrush stands to allow younger sagebrush, native grasses and forbs to grow might provide more high-quality habitat for sage grouse, a candidate species for protection under the Endangered Species Act.
University of Idaho extension faculty members have already set a plan in motion to test that theory with a rangeland study funded by an $8,000 grant from the David Little Endowment for rangeland research.
The study is focused on evaluating the effects of mechanical treatments to reduce dense sagebrush cover and enhance sage grouse habitat, said Amanda Gearhart, University of Idaho rangeland specialist.
“A lot of people have the misconception that all sagebrush is good sage grouse habitat, and in reality that’s not always true,” she said.
As sagebrush gets denser, it tends to get decadent on the bottom. The sparse lower canopy doesn’t provide good cover for sage grouse to hide or nest, and dense stands of sagebrush reduce both diversity and quantity of native, perennial forbs and grasses that are critical for sage grouse survival, she said.
Lack of fire, due to better fire suppression as humans move further onto rural landscapes, has allowed for dense stands of older, less productive sagebrush, she said.
The study will test two mechanical treatments to remove sagebrush, with the possibility of a third chemical treatment, on approximately 640 acres of private land located in the Medicine Lodge area near Dubois in eastern Idaho. The Lawson Aerator and Dixie Harrow mechanical treatments are designed to crush the sagebrush and reduce cover.
Gearhart and her team of university students have already laid out treatment and control blocks and completed pretreatment measurements, collecting production and plant community data. Treatments are scheduled to be done at the end of September, and post-treatment data will be collected next spring.
Gearhart and John Hogge, UI extension educator for Jefferson and Clark counties, will be conducting the biological study. They will be joined by Neil Rimbey, UI extension range economist, who will study the costs associated with the treatments.
The sagebrush that is in the project site is not very productive sagebrush, Hogge said.
Removing the sagebrush will allow younger sagebrush, native grasses and forbs to grow, and the team is optimistic those habitat changes will attract more sage grouse, he said.
“There are a lot of insects that are in the area when there are lots of grasses and forbs. Sage grouse need those during late brood rearing,” he said.
Sagebrush reduction could also provide more forage for domestic livestock, he said.
Not only does the project study the biological effects of sagebrush removal but also the costs associated with the treatments.
“Some studies have started to look at these mechanical treatments but most do not include the economic aspect, which is why we wanted Neil Rimbey to join us,” Gearhart said.
As the team’s range economist, Rimbey will estimate the costs of applying each treatment, which will be of particular use to landowners and livestock producers who want to use these treatments to improve their land, she said.
The team anticipates data from the study may be used by federal and state land management agencies.
“Our primary objective is to improve sage grouse habitat, and we want to provide landowners with useful economic and ecological information about how to treat large tracts of sagebrush that are dense in cover,” Gearhart said.
| |
We can and must take immediate action to address the root causes of the situation and transform the structures that control how food is produced, distributed, consumed and disposed of.
As things stand, protracted crises, including those triggered by conflict and other humanitarian crises, as well as a rapidly increasing population, are leading us to increasing dependence upon food imports. These often interrelated risks, combined with economic shocks, can also undermine livelihoods and lead to high acute food insecurity for millions. The situation is exacerbated by poverty, widespread inequalities, and the COVID-19 pandemic’s impact.
The region is also under increasing pressure from the effects of the climate crisis, extreme drought, and the degradation of natural resources. These factors further aggravate the impact and severity of shocks and reduce resilience.
The NENA region has a long way to go to achieve the UN’s Sustainable Development Goal 2 – targeting Zero Hunger by 2030. In 2020, 59.3 million people were undernourished in the region alone, which corresponds to 14.2 percent of the region’s total population.
Around 165 million of the region’s inhabitants live in rural areas, where the majority of the poor have to put up with inadequate basic services, low opportunities for innovation, limited access to productive infrastructure, services and value chains, and a lack of available jobs.
Increased migration to the region’s cities has been fueling the ever-growing number of urban poor. Many are young people who are often unable to find the opportunities they desire.
The NENA region must deal with structural issues that make it difficult to feed a growing population. Our agrifood systems have failed to provide healthy diets: the food available is high in calories but low in nutrients, leading to stunting, obesity, and micronutrient deficiencies.
Our agrifood systems must be transformed to become more efficient, inclusive, resilient, and sustainable. Science is clear: scaling up alone is not enough. We need structural changes and we must ensure that they happen quickly.
The first priority is to bring everyone to the table. It is up to policymakers to find solutions that will help transform the future of Near East and North Africa agrifood systems. To implement these solutions, it is necessary to form broad partnerships with all stakeholders, including academia and civil society.
Only eight planting seasons remain before the 2030 deadline for achieving the Agenda for Sustainable Development. FAO has been advocating for a comprehensive and cohesive strategy to reach its 17 goals, with agrifood systems at its center. Problems and solutions are interdependent: future generations will not have access to our natural resources unless we can end poverty and hunger, promote sustainable agrifood systems, and strengthen the resilience of rural communities. To change our future, we need to rethink how we manage our agrifood systems.
There are many short-, medium-, and long-term actions that we can take now to create sustainable, inclusive and healthy agrifood systems. To support rural transformation, it is important to harness the potential of technology and innovation across the agrifood value chain. Standards and incentives are needed to encourage changes in consumption patterns, reduce food loss, and increase land restoration and reforestation. It is also important to increase water productivity while limiting water withdrawals in agriculture.
If the most vulnerable are not taken care of, we won’t be able to reach our shared goal of eradicating hunger. Investing more in agriculture in countries facing complex emergencies, which often have the worst food crises in the world, can save lives and help to protect livelihoods. It can also lay the foundations for future resilience and recovery. To ensure a sustainable transformation of agrifood systems, development efforts must be complemented by peace and climate actors.
FAO has decades of experience in both humanitarian and development programmes. FAO’s focus is on strengthening resilience, which allows it to simultaneously address the multiple risks and vulnerabilities that populations face. It also meets immediate humanitarian needs, enabling communities to be better prepared to deal with the next shock or stress.
Greater solidarity and cooperation between countries and regions are key to ending hunger and food insecurity and to ensuring sustainability. We must work together in a coordinated, efficient, and effective manner. Many of these win-win solutions require peace.
The Food and Agriculture Organization of the United Nations (FAO) will continue to support countries in their efforts to work closely with international organizations, academia and the private sector.
The 36th Session of the FAO Regional Conference for the Near East (#NERC36) will offer an opportunity for Ministers for Agriculture from the region, meeting in Baghdad on 7 and 8 February, to discuss these issues and priorities and to take responsibility for transforming agrifood systems for the Sustainable Development Goals.
The Regional Conference will be an important step towards implementing the FAO Strategic Framework 2022-2031. It will help ensure better production, better nutrition and a better quality of life for everyone.
* The writer is the Director-General of the Food and Agriculture Organization of the United Nations (FAO).
Short link: | https://retime.org/innovation-in-agrifood-systems-is-needed-to-feed-people-and-sustain-the-environment-in-the-nena-region-opinion/ |
As a student, you’ll have to have a solid plan for studying laid out. Along with trying to find the time to study, you’ll also need to figure out what is the correct place to help you get your work done and be the most productive. Finding the best study environment is essential because it’ll help you be as productive as possible while retaining your needed information. Every person has preferences as to which area works best.
Some students prefer to be in a quiet place with minimal distractions, while other students like to be at home or in the middle of a busy café. Between studying at home or studying in the library, there are benefits and drawbacks to each space. We’ll outline them for you below to help you make the best choice for your needs.
Studying in the Library
Studying in the campus’s library, or a public library, brings several benefits to the table, especially if you have trouble focusing in your normal environment in the home. It’s an academic setting with a quiet atmosphere that allows you to keep your mind focused on your work. However, this may not be the correct environment for your learning style, so let’s weigh the benefits and drawbacks below.
Benefits
- It’s a nice way to meet other students and find or form a study group
- The library offers an academically centered learning environment that can help you stay focused on your studies
- You have no chores in sight to distract you from your work
- If you have to dive into research, you have a huge amount of resources in easy reach
- If you tend to isolate yourself, studying here is a good way to get out around people with no pressure to socialize directly
Drawbacks
- You’ll typically find a fairly large crowd
- You have to pack your laptop, books, papers, and whatever else you need and carry it to and from the library
- There could be noise from other patrons
- Other students can present distractions by walking to the bathrooms, sneaking in a phone call, or coming and going
- Since there is no kitchen nearby most of the time, you’re limited to vending machines for food or beverages for as long as you stay to study
Studying at Home
Studying at home offers a comfortable, unique experience if you’re burned out from being in class all day, suffer from social anxiety, or you have young kids to take care of. However, just like studying at the library, this may not work for you. We’ll outline the benefits and drawbacks for you below.
Benefits
- You can make your own rules and study in your personal space
- You can multi-task between home responsibilities and homework
- You can wear comfortable clothing and study day or night
- There’s no need to carry notes, computers, or books to and from your study area
- Pets can come in and provide stress relief on short breaks
Drawbacks
- You have higher chances of procrastinating due to all of the distractions at hand
- Getting up and doing chores could make it take longer to study
- Pets can want attention or be a nuisance when you’re super focused on your studies
- Roommates or family members can interrupt you
- All of the noise might not mesh well with your learning style
Which Option is Better?
If you look at the list, you may decide that there are more positives with studying at home over going to the library. You could prefer to study in your own personal space, surrounded by your things. If not, it’s easy to tip the list more toward the library.
If you’re someone who identifies strongly with one way of studying, you most likely already know which environment will be more productive for you. You could switch it up too. For example, you can study at home during the week and go to the library on the weekends. Another option is forming a tight-knit study group with close friends and going to the library on the weekends.
If you find that studying in the library is better, you don’t want to overdo it. Spending all of your free time somewhere on campus can wear you down and cause you to burn out. You have to make a point to schedule some downtime to recharge and do something that isn’t stressful.
Find Your Studying Spot on Our Campus
No matter if you like to study in your room or in the library, you’ll find the perfect spot on our campus. We encourage you to reach out for more information. | https://www.csinow.edu/student-blog/library-or-your-room-where-should-you-study-for-tests/ |
Cut and pasted from an email from the RSC:
This autumn, join the Royal Shakespeare Company to celebrate the 400th anniversary of the making of the King James Bible with a new play by acclaimed playwright David Edgar.
Written on the Heart tells the story of translators William Tyndale and Lancelot Andrewes, both working to the same end but in very different circumstances eighty years apart.
Written on the Heart plays at the Swan Theatre, Stratford-upon-Avon, from 22 October 2011 to 10 March 2012 and tickets start from just £14.
Follow the link for more details: | https://mycroft-brolly.livejournal.com/147702.html |
The implementation of the Common Core Standards has just begun and these standards will impact a generation that communicates with technology more than anything else. Texting, cell phones, Facebook, YouTube, Skype, etc. are the ways they speak with their friends and the world. The Common Core Standards recognize this. According to the Common Core Standards website, www.corestandards.org, "skills related to media use (both critical analysis and production of media) are integrated throughout the standards." Therefore, there will be a need for students to integrate multimedia into their schoolwork to the point where they are just as comfortable creating a video piece to get their ideas across, as they are writing a research paper. There is also a need to teach students how to use media responsibly. Educators understand the importance of integrating media and technology into their curriculum, but it's not always easy to come up with a way to do it. In this article, the author describes the digital storytelling project which she developed in collaboration with 7th grade ELA teachers at Farnsworth Middle School. This digital storytelling project could easily be adapted to fit different grade levels, lesson plans and subjects. | https://eric.ed.gov/?id=EJ991810 |
INTRODUCTION: Obesity is associated with increased all-cause mortality, but weight loss may not decrease cardiovascular events. In fact, very low calorie diets have been linked to arrhythmias and sudden death. The QT interval is the standard marker for cardiac repolarization, but T-wave morphology analysis has been suggested as a more sensitive method to identify changes in cardiac repolarization. We examined the effect of a major and rapid weight loss on T-wave morphology.
METHODS AND RESULTS: Twenty-six individuals had electrocardiograms (ECG) taken before and after eight weeks of weight loss intervention along with plasma measurements of fasting glucose, HbA1c, and potassium. For assessment of cardiac repolarization changes, T-wave Morphology Combination Score (MCS) and ECG intervals: RR, PR, QT, QTcF (Fridericia-corrected QT-interval), and QRS duration were derived. The participants lost on average 13.4% of their bodyweight. MCS, QRS, and RR intervals increased at week 8 (p<0.01), while QTcF and PR intervals were unaffected. Fasting plasma glucose (p<0.001) and HbA1c both decreased at week 8 (p<10⁻⁵), while plasma potassium was unchanged. MCS but not QTcF was negatively correlated with HbA1c (p<0.001) and fasting plasma glucose (p<0.01).
CONCLUSION: Rapid weight loss induces changes in cardiac repolarization. Monitoring of MCS during calorie restriction makes it possible to detect repolarization changes with higher discriminative power than the QT-interval during major rapid weight loss interventions. MCS was correlated with decreased HbA1c. Thus, sustained low blood glucose levels may contribute to repolarization changes. | https://bmi.ku.dk/Medarbejderoversigt/?pure=da%2Fpublications%2Fmajor-rapid-weight-loss-induces-changes-in-cardiac-repolarization(c5f623dd-1eae-4416-8330-abe90c6a95ac).html |
“Neutrality” is a major goal of proper ergonomic design. It simply means working in a position where our muscles are not subject to unnecessary stress or strain while sitting or standing at a desk or workstation. Most ergonomics consultants place “work in neutral postures” at the top of their “ergonomic essentials” lists.
A neutral posture maintains the natural S-curve of our spine. Keeping elbows parallel to the work surface and neck aligned with the spine contribute to a neutral posture. As the illustration shows, you’ll want to consider several dimensions to achieve proper posture for both standing and sitting.
Typical workstation tasks that require ergonomic neutrality:
- Laboratory analysis such as looking through a microscope
- Using lab test equipment
- Assembling products
- Computer input tasks
- Product test and repair
- Packing, shipping
- Receiving inspection
- Etc.
First, Know the Correct Working Height
The height at which a task must be performed is key to achieving ergonomic neutrality. A handy calculator recommends the best work surface height for sitting and standing positions based on a person’s physical height.
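The exact numbers depend on the calculator used, but the underlying idea is simple: the recommended work surface height is derived from body stature so that the elbows stay level with the surface. The short Python sketch below illustrates that idea; the function name and the 0.63/0.40 ratios are illustrative assumptions, not the calculator referenced above, and real recommendations should come from a proper ergonomic assessment.

```python
def suggested_heights_cm(stature_cm):
    """Rough work-surface heights derived from body stature (illustrative only).

    The 0.63 and 0.40 ratios are assumed rule-of-thumb proportions for standing
    and seated elbow height; they are not taken from the calculator mentioned
    in the article.
    """
    standing_elbow = 0.63 * stature_cm  # approximate standing elbow height
    seated_elbow = 0.40 * stature_cm    # approximate seated elbow height (with chair)
    return {
        "standing_surface_cm": round(standing_elbow),
        "seated_surface_cm": round(seated_elbow),
    }

# Example: a 170 cm person maps to roughly a 107 cm standing and 68 cm seated surface.
print(suggested_heights_cm(170))
```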
It’s also important to shift frequently between seated and standing positions throughout the workday to minimize fatigue and long term health problems. Many tasks may be performed from either a sitting or standing position.
When it comes to choosing an ergonomically correct workstation that will be used by different people and/or for different tasks, convenient height adjustability becomes essential. It’s really the only way to achieve a neutral posture whether sitting or standing and when a workstation is being used by personnel of different heights.
Example: Ergonomic Neutrality on the Manufacturing Floor
Height adjustability is not just for the office or lab. It’s equally important on the manufacturing floor. In the illustration below, the operator has adjusted the electric workstation to a height (yellow circle) that allows her to work with elbows and lower arm parallel to the work surface (blue circle).
When another worker arrives for the next shift, he or she can raise or lower the work surface in just a few seconds.
Configurability is Important, Too
When a workstation is used by different people, height adjustability is virtually mandatory. Traditionally, height adjustable workstations have been available in only one flavor: a standalone rectangular workstation such as the one shown above. However, in many work environments a longer (in-line) or corner height-adjustable workstation may be preferable. For example, an L-shaped workstation for testing and repairing small mechanical or electronic products: one wing of the ‘L’ is for test and repair, while computer data entry occurs on the other wing.
The Direct Drive™ Height Adjustable Workstation.
Popular mechanisms for adjusting workstation height include geared cranks, pneumatic, hydraulic, or electric motor drives. For sheer ease of use and long term reliability, electric height adjustable tables are superior to other mechanisms.
The Workplace Direct Drive™ Electric Height Adjustable Workstation is the ideal way to achieve the ergonomic neutrality requirements of multiple shift, multiple user, and/or multi-task functions.
The unique Direct Drive mechanism features a low pitch leadscrew design providing unparalleled stability—and eliminates maintenance. The height of the worksurface remains firmly locked in place even when power is lost.
All height adjustment components are fully enclosed in each leg unit—ideal for use in industrial, lab, or clean room environments.
The Direct Drive design allows extensive configuration flexibility: standalone, in-line, corner, mobile workstations. Accommodates the same range of accessories as Workplace manually height adjustable workstations.
Call 1-800-258-9700 or contact your Design Specialist today to see how easy it is to achieve ergonomic neutrality with a Workplace solution. | https://workplacenh.com/2017/03/09/height-adjustability-key-ergonomic-neutrality/ |
5 Activities to Build Community in the Classroom
It’s back to school season, teachers! You know what that means… Lots of introductions, icebreakers, and getting to know your new students on a personal and academic level. It also means that students will begin to build their classroom community, setting the tone for the rest of the school year! I am a firm believer in fostering a positive and strong community in my class. Younger students need that sense of community to build their sense of responsibility, accountability, and compassion for their fellow classmates. It’s the secret formula to ensuring your year goes smoothly and benefits every student!
Why are activities to build community in the classroom important?
A classroom community is more important to an educational environment than you might think! When a class shares strong feelings of community, they are more likely to hold other students accountable and take ownership of their own behavior. Day-to-day activities will go more smoothly when students work together and understand the value of their interpersonal relationships. They will be more likely to follow rules and work together to get tasks done. Finally, a good classroom community helps students act with kindness, compassion, fairness, and respect towards each other.
1. Start the morning with a song
Using songs in the classroom is a way to build classroom community while learning. Younger students respond really well to songs, and I’ve found that to be true with my first graders. Singing is great for bringing students together and helping them focus on whatever task is at hand. I love using my Songs for Your Classroom with my first graders, but they also work well for kindergartners and second graders.
2. Hold morning meetings
Morning meetings are a fantastic way to hold engaging discussions each morning and help your students focus on the day ahead. I use these Digital Morning Meetings for 1st Grade with my kids. The meetings let students sing a morning song together, answer a question of the day, get moving, and even answer a math problem! Students always get excited to start their day with this fun activity. Plus, the meetings are perfect for both in-person and distance learning.
3. Ask discussion questions
When students get to know each other on a deeper level, it helps foster understanding and compassion and helps them form lasting friendships. Discussion questions also encourage critical thinking skills and help each student develop a sense of self. For this reason, I love including discussion questions in my classroom routine or in my morning meetings. To make it easier for you, I have two sets of discussion questions that are perfect for elementary students. Here is Set One and here is Set Two! I also wanted to include these Morning Meeting Roll and Talk Discussion Questions and Digital Discussion Questions designed for K-2nd grade students.
4. Let students design their perfect classroom
Here’s a fun one: this Design Your Perfect Classroom Activity is an amazing way to let your students envision their perfect classroom and build community while doing so. This project allows your students to think critically about what would be the best learning environment for them and their classmates. Students will work together through 10 steps to complete a model of their perfect classroom. The steps of this project include individual and group brainstorming, research, planning, revising, and creation. Students can reflect on their final products, too! A project based activity like this is a fantastic way to introduce group work to your students and encourage a collaborative environment.
5. Incorporate community building into your first day of school
A positive classroom community starts on the very first day! When planning your first day activities, I recommend including activities that help students get to know each other. For example, my Digital Back to School Activity lets students share their “Silly School Name” with each other. Trust me, this game is sure to bring about a lot of laughs!
Conclusion
Encourage students to build a positive community and take ownership with these 5 activities to build community in the classroom. From silly songs to collaborative projects, these resources will help you construct a compassionate and productive community that will last all year long! | https://funinfirst.com/5-activities-to-build-community-in-the-classroom/ |
Tammy Hart, I.D.D.P, CAPS is a graduate and tutor of QC Design School, as well as a Certified Aging in Place Specialist from the National Association of Home Builders. She is the owner and award-winning designer for the Designer Chick Co., and the Past Director on the National Board for DDA (formerly CDECA).
Have you ever walked into a space and felt instantly calm, as if everything was effortlessly in place? On the other hand, have you ever been in a room that felt chaotic and uncomfortable, but couldn’t pin-point why?
We all have.
What causes those feelings is how the visual weight in a room is distributed. An equal distribution of weight is known as “balance”, and it’s one of the foundational principles of interior decorating and design. Balance is achieved in 3 ways:
- Symmetrically
- Asymmetrically, and
- Radially
Before we talk about balance, though, let’s talk about visual weight!
What is “Visual Weight”?
When we talk about weight, we’re not only referring to how much an object actually weighs. It’s also about how much our eye thinks it weighs. This is impacted by:
- Color
- Size
- Shape
- Proximity
- Texture
- Grounding
For example, imagine that you have two identical couches. One is in cream; the other, in brown. The cream couch will appear visually lighter, whereas the brown one will appear heavier. The same will happen if you change the textures of the material (i.e., leather versus microfiber).
Take the same couch and place block legs on it. On the other couch, place taller hairpin legs. You’ll find that the couch with the taller legs will appear visually lighter. A couch sitting closer to the ground can seem heavier than one standing taller.
The style of the furniture and living space can also impact balance. For instance, imagine you have a couch of Scandinavian style with straight lines, versus a Traditional style one with rounder lines. The rounder lines will create a heavier visual weight. Seat depth will also impact the visual weight: the deeper the seat, the heavier the couch appears.
When creating balance in a space, taking visual weight into consideration is important when determining the placement of objects, too. First and foremost, consider both the architectural features of the room and the focal point you are considering using.
The 3 Ways to Achieve Balance
Symmetrical Balance
Symmetry is achieved by creating mirrored images. Symmetrical balance works best in traditional, formal, farmhouse, and transitional spaces. You can take a room and draw a line down it or across it (these are the vertical and horizontal axes) to see the visual pairs.
The perfect equation of symmetrical balance is created by using visual pairs and by balancing objects of similar visual weight. Just keep in mind that too much symmetry in a space can cause the room to feel monotonous and predictable.
Asymmetrical Balance
Asymmetry can be a bit challenging sometimes. It’s most commonly seen in modern, eclectic, and bohemian-styled spaces. If you’re attempting to create asymmetrical balance for the first time, it can feel a tad tricky, since it doesn’t use visual pairing of items.
Rather, it relies on pairing the elements of the room through visual weight. For instance, you may rely on common colors, or item sizes. Because we’re used to seeing symmetrical balance, asymmetrical balance will challenge your creativity and take you out of your comfort zone.
If done right, it can create a wow factor! It can often lead others to think, “I never thought of doing that; that’s awesome!” But if done wrong, the room will feel chaotic and unpleasant to be in.
Radial Balance
Radial balance isn’t necessarily related to a particular style. More so, it’s related to the spaces with circular centerpieces, where everything radiates out and around from the center focus of the space. The best example to use is to consider a circular dining table with a chandelier over top, and chairs working their way around and out from the table.
How to Become a Master of Balance
Understanding your client’s sense of style is always important. But when determining the type of balance you’ll use to design their space, it becomes essential. So, how can you properly understand what’s needed, and how to give your client what they want?
The best way to become a knowledgeable expert in any area of home design is to get training, and earn your interior decorator certification! Obtaining reputable training from an accredited design school is a guaranteed way to elevate your skill-set, give you an edge over the competition, and impress your clients!
QC Design School, for instance, offers an Interior Decorating Course that can provide you with an interior decorator certification in as little as 3-6 months! You’ll be instructed by a real-life professional designer, and in addition to learning the ins and outs of creating balance and visual weight, discover all the other key elements to successful home design! | https://www.qcdesignschool.com/2020/07/your-interior-decorator-certification-creating-balance/ |
In reference to common Lophatherum, it seems that there is no confusion at all. But it is a different story when it comes to its Chinese name of Dan Zhu Ye, which refers to at least three different species of plants. This situation may cause chaos and lead to wrong medication in TCM practice. So, it is necessary to tell them apart.
Also known as Herba Lophatheri in Latin, Lophatherum herb refers to the dried stems and leaves of Lophatherum gracile Brongn., a perennial grass in the Poaceae family. Its length is between 25 and 75cm. Stems are cylindrical, with joints, a yellowish green surface, and hollow sections. The sheath is cracked. Leaves are lanceolate, 5 to 20cm long and 1 to 3.5cm wide, with a light green or yellow-green surface. Sometimes the leaves are wrinkled and curled. Veins are parallel, with smaller transverse veins that form a rectangular grid, especially on the lower surface. The herb is lightweight and flexible, with a slight odor and taste.
As mentioned above, its Chinese counterpart Dan Zhu Ye may cause confusion because it also covers two other herbs – the leaves of Phyllostachys nigra Munro var. henonis (Miff) Stapf. ex Rehd. and the whole plant of Commelina communis L. As a matter of fact, Lophatherum gracile Brongn. was first recorded in Ben Cao Gang Mu (Compendium of Materia Medica). Consequently, Dan Zhu Ye-containing formulas that predate the Ming Dynasty refer only to the leaves of Phyllostachys nigra Munro var. henonis (Miff) Stapf. ex Rehd., which are now not available in pharmacies and are generally replaced with Lophatherum. If fresh bamboo leaves are needed, they are usually collected fresh as required. As for Commelina communis L., it has a different efficacy and thus should not be confused with Lophatherum leaf.
The chemical constituents of Lophatherum are mainly triterpene compounds, including arundoin, cylindrin, taraxerol, and friedelin. In addition, the above-ground part contains phenolic components, amino acids, organic acids, and sugars. Lophatherum gracile leaf and stem contain arundoin, cylindrin, taraxasterol, friedelin, stigmasterol, β-sitosterol, and campesterol. Roots contain arundoin and cylindrin.
This herb has very high pharmaceutical value. In recent years, Chinese medicine and food science found that its foliage chemistry has good nutrition and health benefits. According to the test conducted, its main functional factors include flavonoids, phenolic acids, amino acids, manganese, zinc and other trace elements. And experiments showed that these active ingredients can contribute a lot of health benefits, including removing the body’s reactive oxygen species that prompt human aging, inducing the activity of antioxidant enzymes inside the organism, enhancing the body’s resistance to stress and fatigue, improving memory, and delaying senescence and so on.
Traditional Chinese Medicine (TCM) holds that this herb is sweet, tasteless, and cold in properties. It covers three meridians: heart, stomach, and small intestine. Its prime functions are to clear heat, relieve fidgetiness, and induce diuresis. Its main uses and indications are polydipsia in heat disease, inhibited voiding of reddish urine, stranguria, and mouth sores.
Lophatherum has been used in Chinese medicine since the Ming Dynasty; that is to say, it has about a 500-year history of medical use. During this time a lot of valuable knowledge has been accumulated and passed down, and its numerous formulas are the main form in which that knowledge survives.
This formula comes from Yi Xue Xin Wu (Understanding of Medical Science). It is basically formulated for heat disease damaging liquid, vexation, and thirst. Other major herbal ingredients are Huang Qin (Baical Skullcap Root), Zhi Mu (Anemarrhena Rhizome), Mai Men Dong (Ophiopogon Tuber), and so on.
This prescription comes from Shang Han Lun (On Cold Damage). It is primarily used for lingering heat and injuries of both Qi and essence after the cure of typhoid fever, febrile disease and summer-heat disease, manifested as fever, sweating, annoyance, vomiting due to inverse Qi, dry mouth and thirst, insomnia due to dysphoria, red tongue, less tongue coating, and rapid string pulse. Other key herbs include Shi Gao (Gypsum), Ban Xia (Pinellia Rhizome), Mai Men Dong, Ren Shen (Ginseng Root), and so on.
This formula is from Wai Tai Mi Yao (The Secret Medical Essentials of a Provincial Governor). It is exclusively used for acute conjunctivitis. Other major herbs are Huang Lian (Coptis Rhizome), Da Zhao (Jujube), Zhi Zi (Gardenia), Che Qian Cao (Plantago asiatica herb), and so on.
The median lethal dose of Lophatherum gracile herb in mice is 0.645 g per 10 g of body weight, equivalent to 64.5 g/kg.
In TCM terms, Dan Zhu Ye should be used with care in cases without excess fire or damp-heat. It should not be used during pregnancy, or in cases of weakness with cold and frequent micturition due to kidney deficiency.
Where can I access scientific studies that support claims for the modern pharmacologic actions of herbs listed on this site?
Please refer to The Chinese Pharmacopoeia and other TCM literature, which are available on the Internet.
Michael Jordan is widely regarded as the greatest basketball player of all time. LeBron James carries the name of "The King". Neither of them has faced each other. Jordan played his last game in 2003 - two months before James became the No.1 pick in the NBA draft.
"He's very talented. But he's young, and there's a lot of things he doesn't know."
CREDIT: YOUTUBE/BASKETBALL NETWORK
"I think he's doing fine on his own. Obviously, you guys are comparing him with me. They did it with me when I came up with [Julius Erving] and Oscar [Robertson]. But I think the thing about LeBron and what makes him hopefully survive is that he does what's best for LeBron, not what people expect him to do, who think he should be Michael Jordan.".
CREDIT: YOUTUBE/ BASKETBALL NETWORK
"He's made his mark in Cleveland. I know New York fans would love to have him, but you need a lot more components than just one player. He's done a heck of a job in Cleveland, and they deserve to have him there." | https://www.essentiallysports.com/stories/-nba-basketball-news-five-times-michael-jordan-addressed-lebron-james-comparisons/ |
Reading Paul Aguirre-Livingston’s article “Dawn of a New Gay” last Friday, I immediately thought of another storm that had been dominating my Facebook news feed a couple of days earlier. That particular outrage was over Sun News Network anchor Krista Erickson’s interview/attack with internationally acclaimed dance artist Margie Gillis. In the televised interview, Erickson “questioned” Gillis about her use of public moneys that she has acquired from government granting agencies. The line of questioning was not horrifying in and of itself. What was shocking was the aggressive mode in which Erickson went about it. She repeatedly cut Gillis off. She insulted her and mocked her. At one point, the visibly shaken Gillis spoke out about what she perceives as the disintegration within Canadian society of the capacity to feel compassion and understanding for others. Gillis characterized the lack of support in the general population for public arts funding as a symptom of this overall disintegration. Gillis also implied that the overt hostility and anger that she was facing at the hands of a (so-called) journalist on live television was further evidence of that very absence of compassionate behaviour.
Sadly, I have to agree with Gillis’s observation. The recent election of Rob Ford as mayor and the Conservative majority in federal Parliament strike me as clear indications that our sense of collective responsibility and caring for one another is diminishing and is quickly being replaced by self-interest and greed. A vision of the Canadian social contract that is built on concepts of compassion, altruism and inter-dependence no longer seems to form the basis of our society. Instead, all I see is a political landscape that is dominated by selfishness, fear of difference and a rejection of inclusiveness.
I would describe Aguirre-Livingston’s article as yet another symptom of this societal shift. There is nothing intrinsically wrong with Aguirre-Livingston’s own experience of being gay, and he certainly has every right to live out his sexual identity in whatever way he chooses. I take no issue with that. What is disturbing is the article’s complete lack of awareness/acknowledgement of his position inside the broader world. The article does not speak to the context in which he is living, nor does it place his experience in relationship to a broader community of homosexuals except in the most superficial of ways. His personal and extremely privileged position inside the gay experience is not the norm. Nowhere close, in fact. For a widely distributed media outlet to imply (or outright state) that it is the norm is outrageous. It is the equivalent to some posh Rosedale resident saying that there is no poverty because they do not directly experience it in their own lives and a newspaper publishing it as some kind of legitimate description of a social reality. It is self-centred, short-sighted and irresponsible. It denies our interconnected existence as a community – a community that is made up of a multiplicity of experiences. It feeds a growingly disconnected society of isolated individuals who have little understanding of the larger social realities that they are a part of. This is frightening to me.
For a long time, our identities as homosexuals were formed by adversity. Today, a select group of people from the community have grown up without these experiences of oppression. As a result, certain aspects of their identities are different from members of previous generations. This should be a good thing. This is an important thing for us to talk about as a community. It is unfortunate that this article did not live up to the task. The result is an apathetic, cynical and, often, contemptible piece of “journalistic” writing that further divides a community that has struggled and continues to struggle for equality, acceptance and basic freedoms. I appeal to The Grid’s and Aguirre-Livingston’s sense of social responsibility as they continue to define their role in the world and assess how they can contribute to the betterment of others. | https://www.dailyxtra.com/my-response-to-dawn-of-a-new-gay-33915 |
In an age of innovation and rising artificial intelligence (#AI), are major functions facing commoditization, disintermediation, or automation as synthetic intelligence increases? What will remain after data science advancements improve the efficacy of automation, as consumers increasingly move into a fully digital delivery model?
The start of this series will evoke some visceral responses—my apologies as we begin. This series is designed to question our foundations—in an age of disruptive innovation, are bankers necessary? We can see this taking place in the real estate markets where increasingly digitalization and automation are challenging the “necessity” of traditional commissions estimated to be over $75 billion per year—or approximately .30% of U.S. GDP.
We can witness this in the once tightly linked corporate bond market, where one investment bank shed 99% of its staff due to innovation, data, process, and technology automation. Additionally, there is #Gartner, which has been quoted as predicting that “Most banks will be made irrelevant by 2030,” with “80% of financial firms” out of business or competitively swallowed; about 1.3 million out of the 2.05 million people now employed would be out of work. Others believe that across all of finance, of which banks comprise just a segment, over 6 million workers will be displaced by 2025. Where will these workers fit in now that algorithms have replaced their job descriptions?
Yet, there is another school of thought. Others believe that the very technology putting people out of work and forcing them to seek alternative employment will boost job markets. Even as these workers struggle for skill relevancy and face rising personal costs for reskilling, the disparity of what #FSBO (financial services and banking organization) leadership should be doing when it comes to innovation, reskilling of work forces, and products and services offered to customers, span alternatives across diametric poles.
That is the idea behind this series—to explore the challenges bankers face. Not to say bankers don’t matter—but to understand what DOES matter—to the customer, to the economy, and to the bankers. Is it not better to ask the questions ourselves than to react to market changes?
As consumers move 100% digital, as neobanks which have no physical footprint gain market share in an age of financial commoditization, as branch closures accelerate due to uncompromising legacy investments and strategy, should banks which have the intellect and experience be leading the disruptive transformation? Or, are we going to wait and watch institutional numbers dwindle to “irrelevancy” (i.e., the decades long trend of losing 200 to 250 banks every year)?
All this begs a “few” questions regarding financial innovation. First, does innovation create banking strategy or does banking strategy drive innovation? Secondly, as global populations move to near 100% subscription to online banking products by 2030 (under 50% now), will governments step in to enact greater personal security for transactions and identities? Thirdly, will banks emerge, a reformulation if you will, as data science and analytic enterprises feeding retail, transportation and even educational institutions? | https://into-our-future.simplecast.com/episodes/are-bankers-necessary |
Today, the multilateral system as we know it is under threat. What is certain, though, is that in the future, in order to survive, multilateralism will have to answer the aspirations of people and meet the needs of mankind as a whole. In fact, multilateralism itself is far from obsolete – but the institutions that serve it are old.
In a recent article, political scholar G. John Ikenberry1 depicts the evolution of the international project over the past seven decades. He interprets present times as a transition period to a new multilateral order. Yet, in the midst of current uncertainties, it is not easy to predict which form it may take. Until now, multilateralism was seen as a “methodology or machinery for responding to the opportunities and dangers of modernity”. It has responded to traditional state power structures. From now on, the global system needs to evolve if it is to be capable of better serving humanity. Global threats are taking an almost existential dimension: climate, security-related or socio-economic. These threats transcend the boundaries of traditional institutions. Geopolitical shifts and growing transnational networks of businesses, civil society and political alliances indicate that people aspire to a deep global reorganisation of powers, ideologies and human activity.
At the GCSP, we believe it is essential to adapt the existing systems in order to better respond to present and future challenges. We think that a shared understanding and common values – the ingredients that draw together networks – are at the core of agile institutions. For the past 30 years, we have helped transform individuals and organisations, equipping them with the mind-sets, skill-sets and tool-sets necessary for keeping the world safer.
The GCSP’s Geneva Leadership Alliance is a partnership with the Center for Creative Leadership (CCL) dedicated to advancing the effectiveness of leadership in public, private, and civil society organisations to achieve collective outcomes. Through our work with several international organisations and governments over the past three years, we have observed that there continues to be a lack of strategic prioritisation and investment to develop and better prepare current and future leaders at all levels.
Leading and influencing teams and organisations of very diverse people doing complex work is challenging and requires specific mind-sets and skills.
The tendency to give priority to subject-matter expertise over leadership capability is often widespread. Many organisations are now recognising that this imbalance needs to be addressed, especially as their activity is getting more complex and funding sources become scarcer.
Interrupting ways of working that are ineffective, questioning them, exploring alternatives and adapting, is a complex endeavour.
There is an over focus on procedures and processes and often not enough clarity on ‘what’ the desired outcomes should be.
- The desire and ability to view issues from perspectives outside of one’s own silo and develop new understandings of prevailing challenges are often lacking.
Judgement about when, what and how an organisation needs to adapt is hampered by the inability to read an often turbulent landscape and understand the wider eco-system.
The fear of negatively impacting sources of funding and public perception leads to leadership decisions based on ‘protection & preservation’ rather than on ‘foresight and adaptation’.
Effective and efficient multi-layered organisations require individuals who can anticipate, adapt and be resilient. These individuals need critical thinking, imaginative and innovative problem-solving skills and attitudes. They represent the fundamental components that form the “machinery” of the international system. Ultimately, institutions that have ambition to shape the new international system will need to create spaces and opportunities for their people to thrive, to work well in high-performing diverse teams, to engage across institutional and cultural boundaries and build trust with wider constituents.
There is huge potential to harness the collective intelligence across the entire system: indeed, the system is powered by people. And people need to adapt to changing circumstances, while also maintaining structures – and at times, redesigning these structures. If multilateral organisations are to survive, they will owe it to their people. | https://www.gcsp.ch/global-insight/shaping-next-multilateral |
Market Data After COVID-19 and the 2021 MPFS: Will There Be a New Normal?
This Featured Article is contributed by AHLA's Hospitals and Health Systems Practice Group.
- April 09, 2021
- Lindsay Beets , CBIZ, Inc.
The health care compliance environment is experiencing unprecedented times. The events of 2020 will impact physician-hospital employment in 2021 and beyond. This Bulletin will summarize several notable events of 2020 and the expected impact on market data and physician compensation fair market value for compliance in the hospital employment setting.
The Environment
COVID-19 Pandemic
Now over a full year into the pandemic, it is clear how drastically the past year upended the health care environment. The Coronavirus Disease 2019 (COVID-19) pandemic tested health care systems’ patient capacity in emergency rooms and ICUs, supply chain, staffing, and telemedicine infrastructure. Shelter in-place orders and personal protective equipment (PPE) shortages forced systems to delay or cancel elective procedures. Patients, fearing the pandemic, refrained from seeking routine and non-emergent medical care. The normal and customary efforts of providers were shifted in an effort to combat the pandemic.
2021 Medicare Physician Fee Schedule (2021 MPFS)
As expected, the changes to the 2021 MPFS became final on December 2, 2020. Included in the final rule, among other things, are updates to work relative value unit (wRVU) weightings, a reduction to the Medicare conversion factor and changes to coding and documentation requirements for Evaluation and Management (E&M) codes.
Consolidated Appropriations Act, 2021 (the Act)
Signed into law on December 27, 2020, the year-end stimulus package known as the Consolidated Appropriations Act, 2021 (the Act, H.R. 133), is a $2.3 trillion spending bill that includes $900 billion in stimulus relief following the economic fallout of the COVID-19 pandemic. This legislation includes substantive changes to numerous programs affecting the health care industry and temporarily waives Medicare’s budget neutrality requirement.
In order to keep changes to the MPFS budget neutral, changes are generally made to either RVU weightings or the Medicare conversion factor. In practice, this typically results in one or more specialties/provider groups coming out ahead, while others come out behind. The 2021 MPFS attained budget neutrality by decreasing the conversion factor from $36.09 to $32.41. Subsequently, the Act waived the budget neutrality requirement and revised the 2021 MPFS conversion factor to $34.89, softening the blow of Medicare reimbursement cuts. While the final conversion factor is lower than originally anticipated, the end result is more complicated and must consider weight changes and other factors, described in more detail later in this Bulletin.
The combination of the COVID-19 pandemic response, 2021 MPFS, and the Act will have a material impact on market data sources, the economics of employing physicians, and productivity-based compensation plans.
Impact on Market Data Sources
Varying responses to the pandemic across geographic locations and health systems will likely have a material impact on frequently-referenced market data sources. The impact of COVID-19 related shutdowns and slowdowns will begin to be reflected in the market survey data published in 2021, and will likely be amplified in surveys published in 2022 and after as a result of the conversion factor and wRVU weight changes found in the 2021 MPFS.
As an example, suppose that an individual physician’s compensation remains constant due to support from their employing hospital. Further, assume that same physician’s productivity falls due to COVID-19-related causes. That physician’s survey responses would include their total compensation from all sources, presumably at a level comparable to prior years, but wRVU production that may be significantly lower than in the past. Multiply this phenomenon by hundreds or thousands of providers within a specialty and one can expect (1) similar levels of total compensation; (2) lower levels of wRVU production; (3) lower levels of professional collections; and (4) increased compensation per wRVU.
Alternatively, if a physician did not receive any type of support and their productivity fell as a result of COVID-19-related causes, their survey responses would look quite different. Compensation per wRVU may be consistent with prior years, but total compensation would be lower, as would wRVU production.
While both scenarios likely took place among survey respondents, a number of resources were available to physicians in private practice and employed by health systems. As such, it is likely that compensation levels were not impacted in proportion to drops in productivity and the market data will be impacted accordingly.
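As a rough illustration of the distortion described above, the sketch below uses made-up numbers (the compensation and wRVU figures are hypothetical) to show how a physician whose pay is held constant while production falls reports a higher compensation-per-wRVU value to the surveys.

```python
# Hypothetical physician: pay supported by the hospital, production down due to COVID-19.
pre_pandemic = {"total_comp": 450_000, "wrvus": 5_000}
pandemic_year = {"total_comp": 450_000, "wrvus": 4_200}

for label, yr in (("pre-pandemic", pre_pandemic), ("pandemic year", pandemic_year)):
    rate = yr["total_comp"] / yr["wrvus"]
    print(f"{label}: {rate:.2f} $/wRVU")

# Roughly $90/wRVU pre-pandemic versus $107/wRVU in the pandemic year, even though
# total compensation never changed. Multiplied across many respondents, the published
# $/wRVU benchmarks drift upward while total-compensation benchmarks hold steady.
```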
Impact on Productivity-Based Compensation Plans
It is not uncommon to see physician-hospital employment arrangements that tie compensation per wRVU or wRVU productivity thresholds to specific data points published in market surveys. The variability of these types of arrangements will likely cause them to become inherently riskier given the uncertainties around the published market data. Higher total levels of compensation may draw additional scrutiny. Valuators may attempt to normalize the market data and regulators may become less comfortable with absolute reliance on the market data. Hospitals may initially attempt to match the higher conversion factors and then find themselves in a position where they have to reduce compensation per wRVU once the market data normalizes.
Benchmarking 2021 work efforts to market wRVU productivity levels may become problematic. Changes in wRVU weights will limit the comparability to historical data. Furthermore, some arrangements call for the calculation of wRVU productivity using the then-current MPFS. These scenarios present significant risk to physician-hospital employment arrangements and their compliance with Fair Market Value (FMV) and commercial reasonableness (CR). Finally, some hospitals may face unfavorable economic circumstances.
The magnitude of the impact on productivity-based compensation plans will vary based on specialty and physician compensation level. Physicians that rely heavily on Evaluation and Management (E&M) codes, largely primary care and medical specialties, have an opportunity for higher wRVUs, and therefore higher compensation, without any change in work effort. The table below shows the weight increases among established patient visit codes (CPT codes 99212-99215).
Virtual visit codes 99441–99443 have been updated to mirror the weights of in-person visits, resulting in increases of 156–180%.
Presumably, physicians on productivity-based compensation plans with no change in their conversion factor will receive more compensation for performing the same E&M codes. The impact on hospitals is not as simple. To illustrate this point, the tables below present the changes in reimbursement, compensation, and margin available to cover overhead for each of the established patient visit codes 99212–99215.
Assuming a physician compensation rate of $50 per wRVU and 3,500 visits per year, increased weights combined with the revised Medicare conversion factor will largely offset higher physician compensation levels as related to E&M codes. As shown in this first scenario, the hospital’s reimbursement increase would fund the increase in physician compensation.
However, increasing the compensation conversion factor to $57 per wRVU starts to result in a more material impact. The hypothetical hospital’s reimbursement increase is no longer sufficient to cover the increase in physician compensation and the hospital’s margin would take a hit of approximately $10,500 per full-time equivalent (FTE).
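A directional sketch of this comparison follows. The 2020 and 2021 work-RVU weights below are commonly cited values and should be verified against the fee schedules; the conversion factors, compensation rates, and 3,500-visit volume come from the discussion above, while the visit mix is hypothetical. Because actual Medicare payment is based on total RVUs (work plus practice expense plus malpractice) rather than work RVUs alone, the absolute dollar figures will differ from the article's tables; the sketch only shows the structure of the calculation.

```python
# Work-RVU weights for established patient visits (commonly cited 2020 vs 2021 values;
# verify against the MPFS before relying on them).
WRVU_2020 = {"99212": 0.48, "99213": 0.97, "99214": 1.50, "99215": 2.11}
WRVU_2021 = {"99212": 0.70, "99213": 1.30, "99214": 1.92, "99215": 2.80}
CF_2020, CF_2021 = 36.09, 34.89  # Medicare conversion factors, dollars per RVU

def margin_impact(visits_by_code, comp_per_wrvu):
    """Reimbursement change minus physician-compensation change, per year.

    Uses work RVUs only as a proxy for payment, so results are directional.
    """
    reimb_delta = 0.0
    comp_delta = 0.0
    for code, visits in visits_by_code.items():
        reimb_delta += visits * (WRVU_2021[code] * CF_2021 - WRVU_2020[code] * CF_2020)
        comp_delta += visits * (WRVU_2021[code] - WRVU_2020[code]) * comp_per_wrvu
    return reimb_delta - comp_delta

mix = {"99213": 2_000, "99214": 1_500}  # hypothetical 3,500-visit E&M mix
print(margin_impact(mix, 50.0))  # $50/wRVU scenario
print(margin_impact(mix, 57.0))  # $57/wRVU scenario
```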
Physicians who perform fewer E&M codes and more procedures will likely not see the same impact on compensation. The 2021 MPFS weights for procedure codes are largely unaffected but will be reimbursed at the new $34.89 conversion factor, resulting in a shortfall to hospitals.
It remains to be seen whether or not the one-time increase to the conversion factor will remain in place into 2022. If the 3.75% increase expires and wRVU weights are unchanged, hospitals could see significant margin erosion in 2022.
Recommended Actions
- Avoid recommending or drafting compensation arrangements with elements tied to specific market survey data points;
- Advocate for production-based compensation amounts based on 2020 wRVU weights to maintain comparability and limit impact on compensation;
- Use caution when referencing market data based on 2020 inputs;
- Discuss with valuation experts strategies to help navigate these unprecedented times and develop new strategies to recruit and retain top talent while mitigating organization risk; and
- Start an open and transparent dialogue with provider staff about these issues and plans to address them.
Maintaining FMV compliance will require keeping a close eye on market data sources and physician compensation arrangements as the totality of the impact of the events of 2020 becomes apparent. | https://www.americanhealthlaw.org/content-library/health-law-weekly/article/1e44e35a-cfd2-4c92-8346-5a77cca7391c/Market-Data-After-COVID-19-and-The-2021-MPFS-Will |
DNA duplicates itself in a process called replication. In another process, transcription, the genetic code is read so that the proper proteins can ultimately be produced. These two events can occur simultaneously in a cell. The cellular machinery required for each process travels along the same DNA strand, occasionally leading to collisions. In a recent paper published in Nature, researchers analyzed the consequences of these collisions. They found that collisions can trigger mutations, leading to genetic changes and disease.
Researchers from the Baylor College of Medicine and the University of Wisconsin developed a laboratory assay that made it possible to track mutations in a bacterial gene. Working with the bacterium Bacillus subtilis, they introduced a gene oriented so that the replication and transcription machinery ran in the same direction, preventing collisions. In a different group, the researchers introduced the gene in the opposite orientation, forcing the two machineries to collide.
In the bacteria engineered to have replication-transcription collisions, mutation rates were significantly higher. Most of these mutations were insertions, deletions, or substitutions. Substitution mutations are point mutations in which a single nucleotide is swapped for another, potentially placing the wrong amino acid in the protein. Insertions and deletions can be far more serious. They can lead to what is called a frameshift mutation, in which an added or deleted nucleotide causes the entire reading frame to shift, altering every codon downstream and changing the rest of the protein. In addition, the researchers found that most of these mutations occurred in the promoter region of the gene, the region normally responsible for regulating gene expression.
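To make the frameshift idea concrete, here is a small sketch using a made-up DNA sequence; the sequence and insertion point are arbitrary and chosen only for illustration. Inserting a single nucleotide regroups every codon downstream of the insertion, which is why these mutations tend to be so disruptive.

```python
def codons(seq):
    """Split a DNA sequence into consecutive three-letter codons (one reading frame)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

original = "ATGGCTGAATTC"                    # hypothetical sequence
shifted = original[:4] + "A" + original[4:]  # insert one extra nucleotide

print(codons(original))  # ['ATG', 'GCT', 'GAA', 'TTC']
print(codons(shifted))   # ['ATG', 'GAC', 'TGA', 'ATT']: every codon after the
                         # insertion changes; in this made-up example the shift
                         # even introduces a premature TGA stop codon.
```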
The researchers concluded that replication-transcription collisions lead to higher mutation rates. Mutations can lead to disease, such as cancer, but they’re also the driving force behind evolution. These new findings can help us understand how these mutations occur and potentially help us cure genetic disorders.
REFERENCE
Sabari Sankar et al. The nature of mutations induced by replication–transcription collisions. Nature (2016). | http://naturalsciencenews.com/2016/06/30/dna-collisions-lead-to-higher-mutation-rates/ |
At Strategy Management Partners, our mission is to make strategies happen for our clients.We focus on helping private and public sector organisations execute their strategies successfully. We work in partnership with our clients to develop pragmatic solutions and support them throughout their strategy execution journey. Our expertise, working style and ability to understand organisations’ unique context and challenges delivers success for our clients.
Strategy execution and management is the on-going process of developing, planning, implementing, testing, reviewing and adapting an organisation’s strategy. Our expertise is underpinned by our experience in driving measurable transformation, change, and performance improvement.
Organisations that use a formal approach for implementing their strategy consistently outperform their peers. Good leadership combined with processes and behaviours that embed a strong management discipline deliver world class strategy execution.
What are your most pressing strategy execution and change challenges?
- Is the need for change understood and agreed?
- Are the strategic priorities clearly expressed?
- Does the whole business understand how to interpret the strategy to deliver results?
- Is the leadership visibly committed to the implementation?
- Do strategic improvement initiatives have adequate focus, momentum and sponsorship?
- Is the organisation equipped to follow-through on its commitments to deliver results?
- Is the business engaged, teams motivated and strategy integrated with operations?
We would welcome an opportunity to talk about how we might best be able to support you.
Meet the team at Strategy Management Partners. | http://strategymanagement.com/ |
Q:
A red and a blue die are thrown. Both dice are loaded (that is, not all sides are equally likely).
A red and a blue die are thrown. Both dice are loaded (that is, not all sides are equally likely). Rolling a 2 with the red die is twice as likely as rolling each of the other five numbers, and rolling a 4 with the blue die is twice as likely as rolling each of the other five numbers.
a. What is the probability of each outcome of the red die?
b. What is the probability of each outcome of the blue die?
c. What is the probability that the sum of the numbers on the two dice is 6?
My attempt
a. Red die probability
1- 1/7
2- 2/7
3- 1/7
4- 1/7
5-1/7
6- 1/7
b. Blue die probability
1 - 1/7
2 - 1/7
3 - 1/7
4 - 2/7
5 - 1/7
6 - 1/7
c) Sum of the numbers on the two dice equal to 6
Various possible combinations (Red, Blue) with probabilities as below
(1,5) - (1/7)(1/7)
(2,4) - (2/7)(2/7)
(3,3) - (1/7)(1/7)
(4,2) - (1/7)(1/7)
(5,1) - (1/7)(1/7)
Overall probability = 4·(1/7)² + (2/7)² = 8/49 ≈ 16.33%
A:
Good job. Your answer is correct.
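For completeness, here is a short brute-force check of part (c): a minimal sketch that enumerates all face pairs using the weighted probabilities derived in parts (a) and (b).

```python
from fractions import Fraction

# Face probabilities from parts (a) and (b): 2 on red and 4 on blue are twice as likely.
red = {f: Fraction(2 if f == 2 else 1, 7) for f in range(1, 7)}
blue = {f: Fraction(2 if f == 4 else 1, 7) for f in range(1, 7)}

p_sum_6 = sum(red[r] * blue[b] for r in range(1, 7) for b in range(1, 7) if r + b == 6)
print(p_sum_6, float(p_sum_6))  # 8/49, approximately 0.1633 (about 16.33%)
```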
| |
Inselbergs and the Genesis Flood
In previous posts, we have looked at several instances of geomorphology, such as planation surfaces and the like. Today we are going to focus on inselbergs. No, Inselberg was not a musician in a German rock band (that I know of), but the word comes from German and means island mountain. We have a passel of them in the USA (such as Stone Mountain), but there are many of them around the world, and they puzzle deep time geologists.
You could be eyeballing a plot of land and suddenly see a huge bump or series of bumps. According to uniformitarian geology, everything happens over long periods of time. Geologists cannot adequately explain how they appeared. To make matters worse, inselbergs are showing signs of erosion that do not fit deep time speculations. The global Genesis Flood provides the most logical explanation for what we observe — which means that Earth is far younger than secularists and stalkers want to believe.
|Eningen unter Achalm, Baden-Württemberg, Germany|
Credit: Wikimedia Commons / Vux (CC BY-SA 3.0 DE) (enhanced)
As the world’s continents were uplifted from the waters of the global Flood, they were greatly eroded. During this massive erosion, the rocks that weren’t pulverized were transported hundreds of kilometres toward the oceans. The enormous power of the receding water, relentlessly shaving off the surfaces it flowed over, left behind large flat areas known as planation surfaces, along with coastal Great Escarpments, large natural bridges, and freestanding arches. Scientists studying conventional geomorphology find all these features puzzling because they ignore the Flood and rely only on slow erosion over millions of years, which does not work.
To read the rest, click on "Inselbergs — Evidence for rapid Flood runoff". I think Inselberg would be a good name for a rock band.
With the launch of the new iPhone 13 this past week, consumer electronics giant Apple may have updated its formidable line-up of 5G smartphones with faster processors and state-of-the-art cameras with low-light functionality, but the biggest update yet to its products is likely to be in its iOS App Store.
Just days before the unveiling of the iPhone maker’s latest gadgets, the long-drawn courtroom showdown between Apple and the maker of the video game Fortnite reached an epic ending on Sept 10. Though the knockout winner was Apple, which won on nine of the 10 counts, Fortnite’s creator, Epic Games, won the last one on points. The case is being appealed, and unless the two sides reach an out-of-court settlement, more courtroom drama is expected, as subsequent appeals could take years.
In her 185-page ruling, Federal Judge Yvonne Gonzalez Rogers declared that Apple was not a monopoly either as a distributor of smartphone apps or within its in-app payment (IAP) solutions. She ruled that smartphone users have viable alternatives to games consumed through Apple’s iOS apps, switching costs are low, barriers to entry are not high, iPhone maker’s IAP requirements are legal, and nearly all of its App Store policies are valid. Apple’s requirement for developers to use the App Store to distribute iOS games and apps makes the iOS more competitive against Android and gives consumers choices in the marketplace, the judge noted. | https://www.theedgesingapore.com/views/tech/epic-dent-apples-app-store-business-model |
Recently,
“In both GATHER1 and GATHER2, avacincaptad pegol consistently showed a treatment effect with the first measurement at month 6 that was persistent and continued to increase over time, with observed efficacy rates of up to 35%,” said Pravin U. Dugel, MD, President of
The FDA’s Breakthrough Therapy designation decision was based on the 12-month primary efficacy endpoint data from the GATHER1 and GATHER2 pivotal studies which evaluated the safety and efficacy of ACP in patients with GA located inside and/or outside of the clinical fovea. Per the special protocol assessment (SPA) agreement for GATHER2, the FDA required the mean rate of growth (slope) in GA area from baseline to month 12. These results showed a significant treatment difference of 35% (p=0.0050; GATHER1) and 18% (p= 0.0039, GATHER2) compared to sham using observed (non-transformed) data; and 28% (p=0.0063; GATHER1) and 14% (p= 0.0064; GATHER2) using square root transformation. In both GATHER1 and GATHER2 there were no events of serious intraocular inflammation, vasculitis, or endophthalmitis.
About Geographic Atrophy
Age-related macular degeneration (AMD) is the major cause of moderate and severe loss of central vision in aging adults, affecting both eyes in the majority of patients. The macula is a small area in the central portion of the retina responsible for central vision. As AMD progresses, the loss of retinal cells and the underlying blood vessels in the macula results in marked thinning and/or atrophy of retinal tissue. Geographic atrophy, the advanced stage of AMD, leads to further irreversible loss of vision in these patients. There are currently no
About Avacincaptad Pegol
Avacincaptad pegol (ACP) is an investigational drug that has not yet been evaluated by any regulatory body for safety and efficacy. ACP is not authorized for any indication in any country. ACP is a novel complement C5 protein inhibitor. Overactivity of the complement system and the C5 protein are suspected to play a critical role in the development and growth of scarring and vision loss associated with geographic atrophy (GA) secondary to age-related macular degeneration (AMD). By targeting C5, ACP has the potential to decrease activity of the complement system that causes the degeneration of retinal cells and potentially slow the progression of GA.
About the GATHER Clinical Trials Supporting Breakthrough Therapy Designation
ACP met its primary endpoint in the ongoing randomized, double-masked, sham-controlled, multicenter GATHER1 and GATHER2 Phase 3 clinical trials. These clinical trials measured the efficacy and safety of monthly 2 mg intravitreal administration of ACP in patients with GA secondary to AMD. For the first 12 months in both trials, patients were randomized to receive either ACP 2 mg or sham monthly. There were 286 participants enrolled in GATHER1 and 448 participants enrolled in GATHER2. The primary efficacy endpoints in both pivotal studies were based on GA area measured by fundus autofluorescence (FAF) at three time points: Baseline, Month 6, and Month 12. This primary endpoint is reflective of photoreceptor death and disease progression. In GATHER1 and GATHER2 combined, the most frequently reported treatment emergent adverse events in the 2 mg recommended dose were related to injection procedure. The most common adverse reactions (≥ 5% and greater than sham) reported in patients who received avacincaptad pegol 2 mg were conjunctival hemorrhage (13%), increased IOP (9%), and CNV (7%).
About Breakthrough Therapy Designation
Breakthrough therapy designation is intended to expedite the development and review of drugs for serious or life-threatening conditions. The criteria for breakthrough therapy designation require preliminary clinical evidence that demonstrates the drug may have substantial improvement on at least one clinically significant endpoint over available therapy. Approaches to demonstrating substantial improvement include the following:
- Direct comparison of the new drug to available therapy shows a much greater or more important response
- If there is no available therapy, the new drug shows a substantial and clinically meaningful effect on an important outcome when compared with a placebo or a well-documented historical control.
- The new drug added to available therapy results in a much greater or more important response compared to available therapy in a controlled study or to a well-documented historical control.
- The new drug has a substantial and clinically meaningful effect on the underlying cause of the disease, in contrast to available therapies that treat only symptoms of the disease, and preliminary clinical evidence indicates that the drug is likely to have a disease modifying effect in the long term (e.g., a sustained clinical benefit compared with a temporary clinical benefit provided by available therapies).
- The new drug reverses or inhibits disease progression, in contrast to available therapies that only provide symptomatic improvement.
- The new drug has an important safety advantage that relates to serious adverse reactions (e.g., those that may result in treatment interruption) compared with available therapies and has similar efficacy.
A breakthrough therapy designation conveys more intensive FDA guidance on an efficient drug development program, an organizational commitment involving senior managers, and eligibility for rolling review and priority review. FDA will review the full data submitted to support approval of drugs designated as breakthrough therapies to determine whether the drugs are safe and effective for their intended use before they are approved for marketing.
About
Forward-looking Statements
Any statements in this press release about the Company’s future expectations, plans and prospects constitute forward-looking statements for purposes of the safe harbor provisions under the Private Securities Litigation Reform Act of 1995. Forward-looking statements include any statements about the Company’s strategy, future operations and future expectations and plans and prospects for the Company, and any other statements containing the words “anticipate,” “believe,” “estimate,” “expect,” “intend”, “goal,” “may”, “might,” “plan,” “predict,” “project,” “seek,” “target,” “potential,” “will,” “would,” “could,” “should,” “continue,” and similar expressions. In this press release, the Company’s forward-looking statements include statements about its expectations regarding the results and implications of the clinical data from its GATHER1 and GATHER2 trial of ACP in geographic atrophy, its development and regulatory strategy for ACP, including its plans to complete its submission of a new drug application to the
References
- U.S. Food and Drug Administration. “Guidance for Industry: Expedited Programs for Serious Conditions - Drugs and Biologics, 2014.” Available at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/expedited-programs-serious-conditions-drugs-and-biologics. Last accessed: November 16, 2022.
Source: | https://investors.ivericbio.com/news-releases/news-release-details/iveric-bio-announces-fda-has-granted-breakthrough-therapy |
K7 Computing Private Limited announced that its Founder and Chairman, J Kesavardhanan, has been elected as the first Chief Executive Officer (CEO) of The Association of Anti-Virus Asia Researchers (AVAR). Founded in 1998, AVAR is an independent non-profit organization established with the mission of countering the spread of malware and mitigating its impact. It is the pre-eminent anti-malware research conference in the APAC region, facilitating interactions among anti-virus and malware researchers from across 17 countries, including Australia, China, Hong Kong, India, Japan, Korea, Philippines, Singapore, Taiwan, UK, and the USA.
K7 Computing has been an active member of AVAR and a strong advocate of collaboration and cooperation among cybersecurity researchers and experts in their shared mission to protect consumers and enterprises from cyber threats.
Commenting on his appointment, J Kesavardhanan, Founder and Chairman, K7 Computing, said: “It is my privilege and honor to become the first Chief Executive Officer of AVAR. AVAR is a one-of-a-kind platform, created for anti-virus and malware researchers to engage in collaborative work to prevent the spread of cyberattacks and the damage caused by them. It gives me an opportunity to work towards ensuring that a proper exchange of information and knowledge on various aspects of cybersecurity takes place between the member organizations consistently.”
He further added: “Cybersecurity is a subject of national significance in India as well as in many other countries. Governments are putting their best foot forward in dealing with cybersecurity incidents as well as in preventing them. Going forward, I aim to reach out to them to get their active participation in AVAR.”
Kesavardhanan plans to develop AVAR as a platform that will cater to the industries most affected by cyber threats and inspire them to get associated with AVAR.
On the new addition to the AVAR leadership team, Mr. Seiji Murakami, Founder, AVAR, said: “I am very glad to see AVAR flourishing so well. AVAR has been embraced well by the cybersecurity vendor and anti-malware research communities during the past 21 years.” He further added: “Kesavardhanan has a great understanding and experience of the cybersecurity domain. I am certain that, through his knowledge, drive, and leadership, he will take AVAR to the next level of increased collaboration, knowledge sharing and improved outcomes. I wish him huge success in his new role.”
K7 Computing recently hosted the annual AVAR Conference in Goa, India, and previously hosted AVAR in Chennai, India in 2013. The international security conference, in its 21st year, saw one of the largest gatherings of cybersecurity experts, researchers, product developers and eminent speakers from around the world, engaging in panel discussions and paper presentations.
lack of crew.
Like all navies, the SAN was short of cash as it recovered from the Depression. The ship was to be the biggest built so far by the Southern African Navy. The 12" turrets were second-hand, sourced from the Australis Navy's conversion of the Agincourt (14x12") to the aircraft carrier Van Diemen. The turrets were refurbished to increase elevation and thus range (from 16 degrees/20,000 yards to 30 degrees/28,000 yards). Despite the bits and pieces fitted to it, the Wildebeest was a fine-looking ship with a modern cruiser-style bridge and funnels. For all of its WW2 service it never left the Indian Ocean and its training duties. The highlight of its war was the tracking down and destruction of two Germanic States merchant raiders that were loose in the Indian Ocean. It was the Walrus aircraft and early radar that made this possible. The aircraft allowed the ship to keep its distance while they interrogated the 'enemy' ship and passed the information to the Wildebeest, which then checked it with the Admiralty and either cleared the ship, detained it for further investigation using the large launch and boarding parties, or sank it.
The Southern African Navy simply termed the ship a Training Cruiser. It was too lightly gunned to be in the battleship/battlecruiser category, so it fitted nicely into the CB designation. The lack of heavy armour showed it was not meant to take on ships of capital rank. Something like a Deutschland class pocket battleship would have been a good match.
Displacement: 17,000 tons std, 20,500 tons full load
Length: 639 ft
Breadth: 75 ft
Draught: 24 ft
Machinery: 4 shaft geared turbines, 60,000 shp
Speed: 27 knots
Range: 8,500 miles at 14 knots, 2,500 miles at 26 knots
Armour: 5.5" belt, 3" deck, 7"/5" turrets, 1.5" secondaries
Armament (as built): 6 x 12" (3x2), 10 x 4.5" (5x2), 12 x 2pd (3x4), 4 x 20mm (4x1), 8 x 0.5" mg (2x4)
Armament (as refitted to 1941): 6 x 12" (3x2), 10 x 4.5" (5x2), 16 x 2pd (1x8, 2x4), 22 x 20mm (4x2, 14x1)
Aircraft: 3
Torpedoes: 6 x 21" (2x3), removed 1941
Complement: 800 + trainees
Notes: HMSAS Wildebeest (1935) stricken from Navy List 1960, scrapped 1964.
TS Wildebeest in its original configuration, though with refitted parts it looks more like the mid-40s than the mid-30s ship as completed. | http://alternateuniversewarships.com/Royal%20Commonwealth%20Navy/CB%201935%20Wildebeest/CB_1935_Widebeest-TS.htm
THE First World War changed politics, and a consensus emerged about the need for a national housing policy.
In line with Lloyd George’s promise during the 1918 election to provide ‘Homes for Heroes’ an Act of 1919 required local authorities to provide working class housing with government subsidies.
Glasgow estimated a need for 57,000 new houses in the aftermath of the war to deal with its acute overcrowding problem.
Across the city, a number of municipal ‘schemes’ quickly emerged on undeveloped ground.
The earliest, including Riddrie, Mosspark and Knightswood, were built under the Ordinary scheme, which was a bit of a misnomer as these were the elite in Glasgow housing stock, and with high rents, rarely housed the working classes.
Built between 1920 and 1927 on open fields to the west of Cumbernauld Road, Riddrie was the first of these housing schemes.
It comprised a mixture of semi-detached and terraced cottages with gardens and three-storey tenement flats.
All the houses had cavity walling and electrical servicing, an innovation at the time. Around 1000 houses were built, but most of them were allocated to skilled workers earning above-average wages.
Mosspark, built in 1924, was elite in terms of Glasgow housing stock in the inter-war period.
It had a generous subsidy, but its rents were kept high by the Scottish Office’s insistence on economic rents. Houses were allocated to ‘respectable’ professional or white-collar workers.
Knightswood was Glasgow’s largest housing scheme when it was built, with a total of 6714 houses.
The land was purchased from the Summerlee Iron Company in 1921 and the Council set about building a garden suburb.
The buildings included semi-detached, terraced and cottage flats, all limited to two storeys.
Provision of amenities often lagged behind the building of houses in Glasgow housing schemes, but Knightswood fared better than many other areas.
The Corporation acquired 148 acres for Knightswood Park in 1929.
In addition to the two bowling greens and four tennis courts, the park included a golf course, pitch and putt course, boating pond, running track and cricket pitch.
Four new shopping centres, eight churches and six schools were also provided.
A further act in 1923 enabled a large increase in the number of more affordable homes to be built as rents were subsidised.
Glasgow built ‘intermediate’ houses which were usually to a similar standard as the ‘ordinary’ houses, but rents were cheaper.
*Did you grow up on a Glasgow ‘scheme’? Send us your memories and photos. | https://www.glasgowtimes.co.uk/news/19277863.mosspark-knightswood-riddrie---glasgow-schemes-provided-homes-heroes-war/ |
All you need to create your outline are 60 index cards, a pen or a marker of some sort and a large space in which to spread out the cards.
This method utilises the Three Act Structure which is a useful structural tool not to mention an excellent way to outline a story.
I would suggest taking a quick note of the key points of the Three Act Structure before you begin, though it doesn’t really matter if you wait and do it at the end.
Take your 60 index cards, which you can buy from almost anywhere (I just make my own from whatever’s handy) and a pen and write out the 60 most important things that will happen in your novel. Just a single sentence to give an idea of what the scene or event will consist of is enough.
Since there are usually about sixty scenes in a novel (there can be more, or even less) it is a good idea to use 15 each for the first and third acts and 30 for the second. This is where the primary focus of the action will be and where a huge chunk of the story is told.
This part might take a bit of thinking, especially if you have not begun to outline your story yet, or even thought about what might happen from beginning to end.
Once you’ve written the single sentence event on all sixty cards, spread them all out in front of you. I use my coffee table because it’s large enough, though you can use a dining table or even the floor if you like.
Take a good look at all of your events. Are they all in the order they need to be in?
Now, remember that list of key points from the Three Act Structure that you may or may not have made a note of? Well, we’re going to use that now, so if you haven’t jotted down the points, you may want to do so now.
First, all of the parts that belong in the beginning, or Act I go in one pile, all those that belong in the middle, or Act II, go in another and finally, all of the events that belong in the end, or Act III, go in a third pile.
Next, take the scene cards in the pile for Act 1 and match them to the elements of this act (you won’t be able to match every card, but you’ll have a sense of where they’ll need to be placed in the story.)
For example, take the scene or event that you think is your exposition or set up. This is where the character is going about their normal, day to day life.
This scene should be first, and the scene that you think describes your inciting incident (the scene that changes the protagonist’s world or forces them out of their normality) goes next, and so on and so forth until you’ve matched up all the cards with the key points of the three acts as best you can.
Having sorted your cards into their respective acts and matched them with their corresponding points, you can go back to them and add any other information you need to.
A good tip here is to keep a blank sheet of paper handy and that way if you run out of space on the index card, you can mark which card you’re working on and continue on the paper. This is what I do when I use index cards because I never seem to have enough room. However, you don’t need to do this if you are able to use the space available or have a great memory.
You can add things like setting and which characters are involved or even things like which POV your scene will be written in along with any other information that jumps out of your head.
That’s it really. When you’ve done all of that, you should have a relatively detailed outline of your novel.
While this outline was presumably designed to be used for novel outlines, I don’t see why it can’t be modified to work for any form of writing.
A short story of 7 scenes and 10,000 words could conceivably be split up into a 2, 3, 2 formation of index cards for the beginning middle and end.
This is just an estimate, and of course, you can experiment with it yourself and see what works. The 7 scenes and 10,000 words are just what appeared when I googled ‘how many scenes are there in a short story.’
The truth is there is no set number, just as there is no set word count. I don’t think I’ve written a short story yet that was over 6,000 words.
Anyway… I’ve gone on for long enough. Thank you so much for reading, I really appreciate it!
Until next time, | https://georgelthomas.com/2017/07/11/60-index-cards-outline/ |
By David Wilfong, NDG Contributing Writer
Bishop Arts Theatre in Oak Cliff is currently running its annual “Down for #TheCount” festival, which is a celebration of women’s voices in theater, showcasing one-act plays by various female playwrights. This year’s production showcases works by Maryam Obaidullah Baig, Kristiana Rae Colon, Katherine Craft, Tsehaye Geralyn Hebert, Linda Jones and Ife Olujobi.
The performance is divided among six one-act plays, running the gamut from a monologue (Jones’ “The Sound”) to a redneck tale-turned-South Asian-inspired dreamscape (Baig’s “Jo Chaho Tum”), all of which is carried out by a consistent cadre of performers. The disparate sourcing of material was aligned by a common theme.
“What made the process a little bit easier for me was the through-line the director, Miss Phyllis (Cicero) established at the beginning,” said actress Feleceia Benton. “The through-line of the whole show was about the lies that we tell ourselves, especially as women. And so I tried to keep that as the underlying thing that I thought about going from one character to the next. So I tried to find some congruence as I transitioned, to try to shift completely out of one character into the next.”
“Down for #TheCount” is not a show for younger audiences. It deals with very real themes such as drug abuse, unplanned pregnancy and racism. The staging is minimalist and the flow of the performance is carried by the strength of the acting performances.
Regular attendees of Bishop Arts Theatre will see familiar faces like the powerful Ash’lee L’Oreal Davis and Kenne Earl (both veterans of Bishop Arts’ production of “Ruined”), as well as newcomers like Ashley B. Jones, who opens the first act of the show.
For those who have never attended, the Bishop Arts Theatre Centre provides Dallas with an Off-Broadway-style intimate venue for taking in live theater performances in the heart of the growing North Oak Cliff district. In particular, the “Down for #TheCount” festival is a fast-paced flood of vignettes which is both thought-provoking and visually stimulating. It showcases up-and-coming playwrights, with a special emphasis on local talent.
“I wanted to make sure that every playwright was valued,” Cicero said. “In my production values, in my acting, in my directing; that every single playwright was a unit that was their own. However, I wanted a through-line for my audience. There needed to be a through-line, so that when we talk about women’s issues, we don’t go all over the place. We’re not scattered. It’s not an explosion. It is a through-line, and one of the through-lines that kept coming to me was lies and illusions. | https://northdallasgazette.com/2018/03/31/thecount-provides-voice-women-spectacle-dallas-audiences/ |
The Delhi Transco Limited website complies with the Guidelines for Indian Government Websites. This enables people with visual impairments to access the website using assistive technologies, such as screen readers. The information on the website is accessible with different screen readers, such as JAWS, NVDA, SAFA, Supernova and Window-Eyes. The following table lists information about the different screen readers:
Information related to the various screen readers
| http://www.dtl.gov.in/Screenreader/1_6_Screenreader.aspx
In 2000, the United Nations drafted the Millennium Development Goals (MDGs) to address the challenges posed by environmental issues such as climate change, global warming and greenhouse gas emissions that have severely undermined human security and economic development. The seventh MDG is to ensure environmental sustainability. In 2015, the Sustainable Development Goals (SDGs) pointed out the responsibility borne by private enterprises in promoting environmental sustainability, while also encouraging civil society and governments to develop partnerships to that end. In the same year, the United Nations Framework Convention on Climate Change in Paris urged signatories to focus on helping less developed countries (LDCs) adapt to the impacts of climate change.
Taking the example of Latin America and the Caribbean where Taiwan has the most diplomatic allies, the UN has designated the region as highly vulnerable to the impact of climate variability. In recent years, the TaiwanICDF has sought to integrate Taiwan’s development experience and technological tools in the fields of agriculture, climate and disaster prevention in order to provide technical assistance and capacity building. This is intended to strengthen the capacity of partner countries to adapt and mitigate disasters in the face of climate change, while also promoting sustainable development and consumption in primary industries, using technology to facilitate sustainable resource management and improve post-disaster recovery and adaptive capacity.
The United States Agency for International Development (USAID) pointed out in its annual report that agriculture in St. Kitts and Nevis faces a number of problems: most farms are small and fragmented, industrial farming has become less profitable, agricultural labor costs are high, agricultural populations are aging and traditionally grown crops lack diversity. Following field studies by Taiwanese experts and their assessments of agricultural vulnerability to climate change of the country, they proposed the Enhancing Agricultural Adaptive Capacity to Climate Variability Project. The project focuses on three measures: Establish an early warning information collection mechanism, develop or introduce techniques to prevent and reduce crop disasters, and increase the dissemination of agricultural information. These are expected to improve the adaptive capacity and resilience of the agricultural sector in St. Kitts and Nevis to climate variability.
Energy efficiency and carbon reduction is already an important focal point in the development of national policies around the world. In addition to bilateral cooperation, the TaiwanICDF works closely with international organizations through green financing and loans, to jointly promote renewable energy and greenhouse gas reduction projects, shouldering its responsibility as a global citizen in environmental protection.
For example, the TaiwanICDF partnered with the EBRD in the implementation of Green Energy Financing Facility (GEFF) in Central and Eastern Europe, the Balkans, and Central Asia. Under the GEFF cooperation, the EBRD and the FIISF will jointly extend financing to Participating Financial Institutions which will finance eligible sustainable energy and resource efficiency investments. This program will address multiple market barriers to financing green technologies. It aims to scale up private sector investment in the more sustainable use of energy and other resources and climate resilience projects. Romania is an energy-intensive country. Most Romanian residential buildings are generally of older construction with low insulation, and have become one of the main reasons for the country’s high energy consumption. The program can provide finance to Romanian households to invest in green products and make their homes more energy efficient and comfortable. On the demand side, people will be more aware of the benefits of green housing, and on the supply side, the affordability of home energy efficiency and green technology is made possible with the provision of loans by local banks, and Romania’s overall energy conservation efforts will benefit. The project has partnered with three financial institutions, and is expected to help 15,000 households to improve energy efficiency, achieve carbon dioxide emissions reduction by 25,000 tons a year, and save 80,000 MWh of primary energy.
To address the waste crisis in Jordan as a result of the dual impact of rapid population growth and the influx of refugees, the TaiwanICDF, through the GESF jointly established with the EBRD, provided loan proceeds to assist Greater Amman import new technology that transforms methane into energy and build new landfill cells to increase waste processing capacity. Moreover, the loan is to help introduce new solid waste processing technology and implement a comprehensive landfill-gas (LFG) recovery system. Through this project, the Government of Greater Amman Municipality has not only established a new solid waste management company, it has also contracted the design, execution and operation of the landfill-gas recovery system to a private sector company based on a build-operate-transfer (BOT) contract. This partnership with the private sector seeks to increase the operational efficiency of the landfill-gas recovery system and will serve as a model for cities across the Middle East.
| https://www.icdf.org.tw/ct.asp?xItem=12408&ctNode=29857&mp=2
(Turkish Journal of Neurology)
The coronavirus disease-2019 pandemic, one of many global threats to human health, provides an opportunity to analyze how to detect, minimize, and even prevent the spread of future viral zoonotic agents with pandemic potential. Such analysis can utilize existing risk assessment techniques that seek formally to define the hazard, assess the health risk, characterize the health threat, and estimate the probability of occurrence.
Transient global amnesia (TGA) is a clinical syndrome characterized by sudden-onset anterograde amnesia, accompanied by repetitive questioning, sometimes with a retrograde component, lasting up to 24 hours, and without compromise of other neurologic functions. Typically, it occurs in individuals aged 50-80 years, with a decreased incidence in younger and older populations. TGA may have many causes; hippocampal ischemia is thought to be among them. In this case report, a 67-year-old woman who presented with clinical features of TGA accompanied by right hippocampal diffusion-weighted imaging hyperintensity is described.
Multiple sclerosis (MS) and Parkinson’s disease (PD) are progressive central nervous system diseases that cause significant activity limitation and participation restrictions by causing motor and non-motor symptoms in patients. With this case report, we aimed to present the effects of the game-supported rehabilitation in a patient with co-occurrences of MS and PD that we rarely encounter. A 54-year-old female patient with co-occurrence of MS and PD who was mobilized with a wheelchair was evaluated as a case. The patient was treated for 1 hour, 3 days a week for 8 weeks. After a 30-minute neurophysiologic exercise program, the patient was taken to 30-minute game therapy using the “Smart Physiotherapy Game System (USE-IT)”. USE-IT, a game console developed in line with our clinical experience, is also a TUBITAK 1512 project. On the game console, the patient played six games using different grip materials. Before and after the treatment, diseases levels and findings were evaluated using the expanded disability status scale, the modified Hoehn and Yahr scale, and the unified PD rating scale. Frequency of falling was asked to the patient and relatives, rigidity was determined using manual evaluations, muscle strength was assessed through gross muscle strength assessment, cognitive status was evaluated using the Montreal cognitive assessment scale, posture was evaluated with New York posture rating scale, manual skills were evaluated with the Minnesota manual dexterity test, and fatigue was evaluated with fatigue impact scale. Functional condition was evaluated using the functional independence measurement and quality of life was evaluated with MS quality of life questionnaire and PD questionnaire. As a result, it is seen that there are clinically significant improvements in the severity of disease, fatigue, falling, postural disorders, manual skills, physical, cognitive and emotional state, mobility, activities of daily living and quality of life of the patient.
Pisa syndrome (PS) has been described for the first time as a side effect of neuroleptic treatment in patients with schizophrenia. After its first description, PS was reported in patients on dopamine receptor antagonists, cholinesterase inhibitors, and antidepressants. PS was also associated with neurodegenerative diseases such as Alzheimer’s disease, multiple system atrophy, and dementia of Lewy bodies (DLB). Dopaminergic treatment in Parkinson’s disease (PD) may also lead to PS in PD patients. Here, we report a patient with probable DLB who developed PS after the initiation of piribedil treatment. After cessation of piribedil, PS disappeared entirely. We want to highlight that PS related to dopaminergic treatment may be reversible, and like other dopamine agonists, piribedil has the potential to cause PS in patients with parkinsonism.
Objective: To screen cognitive functions using the Montreal Cognitive Assessment (MoCA) test and to determine the most common central nervous system complications in adults with sickle cell anemia (SCA). Materials and Methods: One hundred adult patients with SCA and 82 healthy controls participated in this study. Controls were matched for age, sex, and education level. We reviewed the demographic information and laboratory values of all patients. The patients were questioned about common CNS complications including headache, ischemic or hemorrhagic stroke, epilepsy, and cerebral venous sinus thrombosis. The MoCA test was used to assess neurocognitive function in all participants. Results: Of the 100 patients with SCA, 38 patients had chronic or recurrent headaches, 10 had a history of depression, and four patients had a history of ischemic stroke. None of the patients had a history of epilepsy, hemorrhagic stroke or cerebral venous sinus thrombosis. The median MoCA score of the patients was significantly decreased compared with that of the control group (p<0.001). MoCA scores below 21 points were observed in 50% of the patients. The MoCA scores were negatively correlated with age but positively correlated with education level (r=-0.181 p=0.015, r=0.483, p<0.001 respectively). There was a significant correlation between a history of chronic or recurrent headaches and lower MoCA (p=0.003). Conclusion: Cognitive impairment was the most prevalent neurologic symptom in Turkish adult patients with SCA. The MoCA test may be a useful and easy screening test to evaluate and follow cognitive impairment. A history of first ischemic stroke during adulthood was observed in one patient. Two patients had severe neurologic sequela findings due to ischemic stroke.
Objective: The retina layer belongs to the end-stream region of the internal carotid artery, and thus various ophthalmic symptoms can present in patients with carotid artery stenosis. The aim of this study was to examine the changes in retinal nerve fiber layer thickness (RNFLT), central macular thickness (CMT), retinal ganglion cell layer (RGCL), and choroidal thickness (CT) in patients who had unilateral (symptomatic or asymptomatic) carotid artery stenosis (CAS) using optical coherence tomography (OCT). Materials and Methods: In this prospective observational study, patients with confirmed unilateral CAS (symptomatic or asymptomatic) in computed tomography angiography were recruited. RNFLT, CMT, and RGCL were compared using spectral domain-OCT. CT was analyzed using enhanced depth imaging- OCT. Results: A total of 28 patients with unilateral CAS (17 asymptomatic, 11 symptomatic) were recruited. There were no significant differences between the eye on the stenotic side and the fellow eye according to RNFLT, CMT, RGCL, and CT in the asymptomatic group (p=0.986, p=0.945, p=0.569, and p=0.796, respectively). Similarly, in the symptomatic group, no significant differences were found between the eye on the stenotic side and the fellow eye according to the same parameters (p=0.693, p=0.409, p=0.792, and p=0.597, respectively). When comparing the eyes on the stenotic sides in both groups, no significant differences were found (p=0.85, p=0.24, p=0.7, p=0.98 respectively). Conclusion: The decrease in retinal artery blood flow did not lead to morphologic or functional changes of the retina in symptomatic or asymptomatic carotid artery disease.
Dysphagia, which is frequently seen in patients with multiple sclerosis (MS) and defined as difficulty in swallowing, can lead to serious complications such as aspiration pneumonia, dehydration, malnutrition, and increases morbidity and mortality rates and decreases quality of life. In patients with MS, dysphagia can be intervened by pharmacologic or surgical methods; this symptom can also be controlled by non-pharmacologic and non-invasive methods such as sensory stimulation techniques, swallowing maneuvers, dietary modifications, and positional swallowing techniques. No previous systematic reviews on the effects of non-pharmacologic or non-invasive methods on dysphagia in MS have been published. The main objective of this study was to summarize and qualitatively analyze published studies on non-pharmacologic or non-invasive methods effects for dysphagia in MS. Within the scope of the study, a detailed literature review was performed and four studies were examined considering the inclusion criteria. The non-pharmacologic applications in the studies are as follows: Traditional dysphagia rehabilitation methods, which include methods such as oral motor exercises and swallowing maneuvers; electrical stimulation, and respiratory muscle exercises. In these studies, dysphagia and/or swallowing-related quality of life were measured with similar scales. It is seen that all of the related interventions have a significant effect on dysphagia and/or swallowing-related quality of life in patients with MS. In conclusion, in light of the information in the literature, non-pharmacologic methods can be said to be effective in the control of dysphagia in patients with MS. In addition, it may be suggested to conduct experimental and more comprehensive studies in this field.
Objective: Neurological manifestations associated with coronavirus disease-2019 (COVID-19) are broad and heterogeneous. Although the predominant clinical presentation is respiratory dysfunction, concerns have been raised about the neurological hallmarks. Many reports suggest some findings on electroencephalography (EEG) can be relevant to COVID-19. Materials and Methods: Patients with COVID-19 admitted to hospital and referred for EEG from March 1, 2020 to February 15, 2021, were retrospectively enrolled. When research databases were queried with the terms “COVID-19 (ICD code:10: U07.3) and “EEG”, total number of patients obtained was 32. Number of patients excluded due to unconfirmed diagnose with COVID-19 was 12. Twenty adult patients with certain diagnose of COVID-19 who underwent 21-electrode routine EEG during the outbreak with neurological deterioration were identified. Results: Background abnormalities was evident in one of fourth patients (n=5, 25%). Mild diffuse slowing (n=3, 15%) and focal slowing (n=3, 15%) with left frontotemporal tendency (n=2, 10%) were observed. Epileptiform abnormalities and seizures were detected showing focal (n=4, 20%) or generalized onset (n=1, 5%). Conclusion: Here we performed a retrospective single-centre study to evaluate the electroencephalographic findings in patients diagnosed with COVID-19 since it remains unknown. it needs to be more clarified with increasing number of recordings. | https://app.trdizin.gov.tr/dergi/TlRBMk5nPT0/turk-noroloji-dergisi |
I think we will all look forward to the time when we can get together and show our various views of this subject - something Sally P. couldn't have known when she set it, would have a much larger resonance with us all this year!
Spurred on by Sally's boots, I decided to draw a pair of my own shoes. I chose a pair I have had for over 30 years, and don't wear very much now. They look much shinier in this scanned version of the drawing - not sure why! I think shoes acquire character from the feet that wear them, so I'm not sure what these say about me. I am going to have a go at drawing some basketball boots next - they'll say something else altogether!
Carolyn has stepped out of her comfort zone and sent me this drawing of Sally P's husband Mick, from the photo last week. I think there is definitely a likeness there, so keep practicing portraits, Carolyn!
Lesley says,' I’ve been working in the garden this week so not much time for being creative, unless you count moving trellis and paths and cutting back over enthusiastic rambling roses....
So all I have done is add some sheep and posts and worked on the rock outcrops in last week’s landscape.' She says she still has work to do on the rocks to make them more separate from one another. Coming along nicely, though!
Jane sparked a discussion when she spoke about her drawing this morning on Zoom. Her experience was how different it was to work just from a photo, with no means of walking round the subject or looking at exactly how one plane joins another, or what is around the side beyond your view. She also felt that the resultant drawing lacked something of the life of the subject. The artists who compete on Portrait Artist of the Year (currently on Sky Arts and available to watch on Freeview, but you may have to rescan to find it) often use an iPad. They photograph the subject from different aspects, and use the image on their iPad as an aid to painting from life. The consensus was that it is easier to work from a photo if you know the subject, than if you don't.
I also used a long stick and attached brush pens, then water, it was definitely loose, not so sure it worked but it is good to try.'
'Three pics, half hour sketch of boots with watercolour wash, hour drawing of Mick in pencil and a pastel picture of Mick that I still need to work on. Not sure who the model is!'
Stay well everyone. | http://www.eyeartsguild.org.uk/blog/12th-november-2020 |
1) How long did the Hundred Years War last?
The Hundred Years' War, a conflict between England and France, actually lasted 116 years. It began in 1337 and ended in 1453, although there were long periods of truce or low-level fighting during that time.
2) Which country makes Panama hats?
Panama hats are made exclusively in Ecuador and are woven by hand from a plant called the Toquilla.
3) From which animal do we get catgut?
usually sheep - sometimes horses
4) In which month do Russians celebrate the October Revolution?
On November 7, 1917, Bolshevik leader Vladimir Lenin led his leftist
revolutionaries in a nearly bloodless uprising against the ineffective
Kerensky Provisional Government. As Russia was still using the Julian calendar at the time, period references show an October 25 date, and it's still called the October Revolution
5) What is a camel's hair brush made of?
A Camel hair brush can be made of ox, goat, squirrel, pony, or any
variety of other natural animal hairs. Camel is the name of the person
who invented the Camel hair brush.
6) The Canary Islands in the Atlantic are named after what animal?
Dogs (Canares, from the Latin, meaning dogs)
7) What was King George VI's first name?
Albert Frederick Arthur George
8) What color is a purple finch?
Crimson
9) Where are Chinese gooseberries from?
New Zealand, the U.S., Europe, and Chile. Chinese Gooseberries are
native to China but are now grown commercially in other countries. In
Chile, these fruits were initially called Chinese Gooseberries, but
they are now more widely known as kiwis, which comes from the kiwi
bird, the national bird of New Zealand.
10) What is the color of the black box in a commercial airplane?
It's orange. Orange makes the 'black' box easier to locate in the event of an air crash than, say, had it been actually black.
The research of the mathematical neuroscience lab sits at the frontier between biology and mathematics. In the team, we work with experimentalists (and some members also perform their own experiments), build up models and analyze them mathematically.
Currently, we develop several research directions.
The topological organization of the cortex
We also address the functional organization of the cortex using biological experiments and modeling, in order to understand their topology and how they subtend visual perception. Functional maps are characterized with optical imaging techniques (intrinsic signals or voltage-sensitive) and electrophysiology.
In order to understand the role of visual experience in shaping functional maps, these are recorded during development, or in pathological conditions of vision (blindness, strabismus, orientation deprivation...).
The functional networks subtending these maps are characterized with dye injection combined with optical imaging or electrophysiology, or are inferred through mathematical models.
Large-scale neuronal networks
In order to understand the emerging properties of large neuronal networks, we analyze the activity of large neural assemblies. We further developed stochastic analysis methods for such equations, taking into account the specificity of cortical networks, in particular their topology, spatial extension, and resulting space-dependent delays. In order to understand the role of noise and heterogeneity, we reduced these equations for a particular model (the Wilson-Cowan system) in which the dynamics reduces to a simpler deterministic dynamical system. We thus evidenced a surprising phenomenon: noise and heterogeneity govern the qualitative properties of the macroscopic solutions, inducing in particular the emergence of synchronized periodic activity.
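As a toy illustration of that phenomenon, the sketch below simulates a stochastic Wilson-Cowan-type excitatory/inhibitory rate model with additive noise and a heterogeneous external drive. The parameter values, the sigmoid gain and the Euler-Maruyama discretization are illustrative assumptions, not the equations actually studied in the lab.

```python
import numpy as np

def simulate_wilson_cowan(T=200.0, dt=0.01, noise=0.3, heterogeneity=0.2, seed=0):
    """Euler-Maruyama simulation of a noisy two-population rate model.

    E and I are the mean firing rates of the excitatory and inhibitory
    populations; `noise` scales additive white noise and `heterogeneity`
    scales a random static offset in each population's external drive.
    """
    rng = np.random.default_rng(seed)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Coupling weights and baseline drives (illustrative values only).
    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 10.0, 2.0
    h_e = 1.0 + heterogeneity * rng.standard_normal()
    h_i = -2.0 + heterogeneity * rng.standard_normal()

    n_steps = int(T / dt)
    E, I = 0.1, 0.1
    trace = np.empty((n_steps, 2))
    for t in range(n_steps):
        dE = -E + sigmoid(w_ee * E - w_ei * I + h_e)
        dI = -I + sigmoid(w_ie * E - w_ii * I + h_i)
        E += dt * dE + noise * np.sqrt(dt) * rng.standard_normal()
        I += dt * dI + noise * np.sqrt(dt) * rng.standard_normal()
        trace[t] = E, I
    return trace

# Comparing runs with noise=0.0 and noise=0.3 shows how the noise level can
# change the qualitative behaviour of the macroscopic activity.
rates = simulate_wilson_cowan()
```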
Hybrid Dynamical Systems and Single-Cell Dynamics
Neurons display continuous nonlinear dynamics interspersed with discrete events, called spikes, that are meaningful events transmitted to the connected neurons. This structure led us to analyze hybrid dynamical systems coupling continuous dynamics (the excitable membrane potential) and discrete phenomena (spike emission). We have developed a thorough analysis of nonlinear bidimensional integrate-and-fire neurons. This led us to introduce a new, versatile model of neuron, the quartic integrate-and-fire neuron, supporting subthreshold sustained oscillations. The spike dynamics was rigorously analyzed through the introduction of a specific firing map, linking the spike pattern emitted to the excitable dynamics. We also investigated the well-posedness properties of these models and the precision of numerical simulations for such systems. | https://blogs.brandeis.edu/mathneuro/research/
By Dr. Sami Bahri
At a lean management conference, I met a Toyota executive whose business card said “Jamie B…, Vice president of X.” He had another business card: “Jamie B…, President of Y”. I wondered why he occupied two different positions. “It is very usual for us to have several jobs at Toyota,” he said. I learned later that some Toyota employees go through four different jobs every day – they are cross trained for every one of them.
Imagine the difference between Jamie’s situation and the way my office functioned before we applied lean management. We had one assistant per room. If her patient showed up, she would work; if not, she would wait for the next one. She would help with the other assistant’s patient, only if we asked – and that was not considered part of her job. Assistants, hygienists and front desk personnel were three different groups that never mixed.
The situation was not that extreme all the time, but we certainly went through periods where functions were firmly separated. From an organizational standpoint, having a clear function separation certainly feels neatly organized. But it has a flaw that makes it very costly.
Why is the clear separation of functions very costly?
That separation makes equalizing the work load among employees almost impossible. Unless the workload is equalized, however, some people will be idle when others are busy. The busy ones feel that the rhythm is hectic; but if we can pass some of the load to the idle ones, everyone will be working slower.
However, passing some of the load is not a manager’s spontaneous reaction to overload. The natural reaction is usually to hire more assistants if the assistants are too busy, or more office employees if the front office is too busy. As we will see, hiring more people makes things worse.
How can hiring additional employees harm productivity?
Let’s say your assistants became overwhelmed. You hire a new assistant to alleviate the pressure, which in fact reduces the load on the rest of the assistants and they feel relieved. After some time, however, you find that the excess demand has been absorbed, and you are running out of work for the extra assistant. You also find that the increase in overall production was proportionately smaller than the amount you paid for her salary.
The new assistant has now some time on hand and all the team members start looking for a solution to her idleness. Fairness is on their mind! And to them, fairness means that her workload should not be smaller than theirs. We have experienced the following scenarios over time:
- The newcomer tries to look busy in order to justify her presence at the office. She then starts doing unneeded work. She spends recourses that otherwise would have been available to treat patients.
- The office manager gives everyone less work – she distributes the idle time. Everyone slows down and gets used to a slow pace. Then, when demand picks up again, it will be difficult to bring the team back up to the previous speed.
- The “hide and seek” game: If you have one assistant and one task, she will manage it properly. However, when you have two assistants and one task, they will play “hide and seek” until one of them is caught. As soon as she starts to help a patient, the other assistant becomes more visible. Again, when work goes back to a normal pace, it will be difficult to bring the team back to speed.
As you can see from those examples, an excessive number of employees can cause a large amount of process waste. When you add a new employee, the percentage of production devoted to salaries increases, and you realize that giving salary raises will become more difficult. Consequently, keeping employees for a long time will become more challenging.
Therefore, it is in everyone’s interest to find solutions that improve efficiency by reducing the load on the actual team members. This is achieved through waste elimination. Such solutions will allow the current team to handle increased productivity without hiring new employees, and without having to work harder.
How to handle increased demand without hiring additional employees?
Two of the most effective solutions to absorb demand increase are leveling and cross training. Leveling was covered in a previous article, so let’s talk about cross training.
Not every employee needs to become an expert
First, let us make it clear that we are not trying to train every employee to the point where they are all experts at every job. In football, for example, we would not try to make every player a quarterback; that would be time consuming, unnecessary and even counterproductive. What we are trying to do can be explained by this example from Toyota.
Imagine a U-shaped cell when you plan the flow of an operation
One of the priorities in the Toyota Production System (TPS) and in Lean Management is flexible staffing – matching the fluctuations in demand by changing the number of employees involved.
As one way to attain that flexibility, Toyota’s engineers have arranged the workstations in U-shaped cells where machines are organized in the sequence of work. If demand is low, they use one employee to assemble the product by walking it from station to station until the product exits at the end of the cell. If demand increases, they will bring in additional employees from a different section to help with the load, until the production pressure is dissipated.
As you imagine a U-shaped cell, you can certainly see how the beginning and the end of the process are located in the same area. You can place one expert in that area, who will control the entire assembly process, from entrance to exit. That expert should have the knowledge and the skills to make decisions and execute them. The rest of the stations in the cell are filled with newer employees who are not able to make decisions by themselves, but have received enough training to carry out instructions given by the experts.
That is exactly what we try to achieve (it is also the main point of this article): We try to have one expert in each area of the operation, and train the rest of the staff until they can execute the solutions suggested to them by the experts.
How does that apply to dentistry?
The easiest example is probably how assistants in our office file insurance claims from the operatory. Sometimes they stumble on a complex case where they need help from the insurance coordinator. They just ask her for guidance; when she gives them the answer, they can apply it because they have received enough training in that field.
What is an easy way to cross train your employees?
Some cross-training techniques have worked for us. They will not necessarily work for you, but once you learn the following principles you will find your own way.
- Standardize to eliminate the need for cross training. While we advocate cross training, we advocate eliminating the need for it. That’s because cross training is a tool – a means – not a goal. The goal is cost reduction through waste elimination. Any effort, cross training included, not directly involved in patient treatment, is a candidate for elimination. Every time you plan a process, one of your main objectives should be to reduce the amount of labor it requires. The goal is to keep the staff available for value-added work. We standardized the room replenishment process, for example, until a person who received no training at all could replenish the supplies very quickly and accurately.
- Keep formal training to a minimum. Adults learn by doing. Classroom training has little effect and a small return on investment. In a classroom setting, we prefer to teach only the basics that allow for a common language.
- Intensify on-the-job training through coaching. You certainly know the difference between a teacher and a coach. A teacher gives you the information; a coach will make sure that you know how to apply it correctly. We need to become coaches, watching over people as they are performing their work, guiding them to avoid mistakes, helping them to develop their thinking and working habits. Are you worried that you might teach them and they might leave? Well, as my friend Orest Fiume, author of “Real Numbers,” said: “You shouldn’t worry if you teach them and they leave; you should worry if you don’t teach them and they stay!”
- Make every moment a coaching moment. As I am treating patients, I explain what I am doing and why to the assistant, and sometimes to the patient. Sometimes patients think that the assistant should already know what we are doing and they would lose trust if I explain it to her. In those cases, I give the patient a mirror and explain to them what I want the assistant to hear. This way, the assistant gets the training without the patient even noticing.
- Manage busy times differently than slow times.
a. Busy times: When you are busy, you have less time for training. You want to give each task to the most qualified employee.
b. Slow demand times: When the schedule slows down a little, you can call your newest assistant to help you so you can train her. A good idea would be to have the experienced assistant stand behind her and coach her with every step to make sure she gets a quick and precise training.
The main goal of cross training is to equalize the workload among the team members. That equalization allows performing more procedures with the same number of people. This, in turn, guarantees that you won’t need to hire more people and that you will keep your current employees longer. If you would like to research the subject of equalization, it is called Shojinka in Japanese, and the word has been adopted by English-speaking lean leaders.
Shojinka is the equalization of the load that remains after leveling the schedule. While Shojinka means distributing the work evenly among people, leveling means distributing the work evenly over time. Leveling assumes that you have control over when to schedule an appointment.
However, no matter how hard you try to level – distribute the load evenly over time, you might find that the schedule gets uneven from time to time. That is when Shojinka – equalizing the workload among people – comes into play, allowing you to utilize your resources with flexibility. If you would like to research leveling for yourself, you could also research its Japanese name, Heijunka. Heijunka has also been adopted in English. The combination of Heijunka and Shojinka is very effective in boosting productivity. | https://www.dentalgrouppractice.com/cross-training-done-right-maximize-efficiency-and-profits.html |
The Parent-Sibling-Primary pattern is a combination of both the Parent-Primary pattern and the Sibling-Primary pattern.
The Primary Domain test data generation is dependent on the Parent Domain test data generation. Each time the Parent Domain completes an iteration of test data generation, the Primary Domain is called to complete one or more iterations of its test data generation with respect to the Parent Domain.
At the same time, the Primary Domain test data generation is also dependent on its Sibling Domain’s test data generation. Each Sibling Domain completes one iteration of test data generation before the Primary Domain completes one or more iterations of its test data generation.
The Primary Domain can have six combinations of test data generation when the Parent-Primary pattern and the Sibling-Primary pattern are combined.
One to One (1-1) Parent-Primary Pattern + One (1) Sibling-Primary Pattern: For each generated record of the Parent Domain, generate exactly one record for the Primary Domain. Before each Primary Domain record is generated, one record from the Sibling Domain is generated.
One to One (1-1) Parent-Primary Pattern + Many (M) Sibling-Primary Pattern: For each generated record of the Parent Domain, generate exactly one record for the Primary Domain. Before each Primary Domain record is generated, one record from each Sibling Domain is generated.
Fixed One to Many (1-N) Parent-Primary Pattern + One (1) Sibling-Primary Pattern: For each generated record of the Parent Domain, generate exactly N records for the Primary Domain where N is a constant greater than 1. Before each Primary Domain record is generated, one record from the Sibling Domain is generated.
Fixed One to Many (1-N) Parent-Primary Pattern + Many (M) Sibling-Primary Pattern: For each generated record of the Parent Domain, generate exactly N records for the Primary Domain where N is a constant greater than 1. Before each Primary Domain record is generated, one record from each Sibling Domain is generated.
Dynamic Zero to Many (0-M) Parent-Primary Pattern + One (1) Sibling-Primary Pattern: For each generated record of the Parent Domain, generate M records for the Primary Domain where M is a randomly generated number between 0 and N. Before each Primary Domain record is generated, one record from the Sibling Domain is generated.
Dynamic Zero to Many (0-M) Parent-Primary Pattern + Many (M) Sibling-Primary Pattern: For each generated record of the Parent Domain, generate M records for the Primary Domain where M is a randomly generated number between 0 and N. Before each Primary Domain record is generated, one record from each Sibling Domain is generated.
The pseudocode below shows the six sequences in which test data is generated for the Parent-Sibling-Primary pattern.
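A minimal sketch of those nested generation loops, written here as Python-style pseudocode; the domain objects, the generate_record method and the helper arguments are illustrative assumptions rather than GenRocket's actual API:

```python
import random

def generate(parent, siblings, primary, parent_count=10,
             fixed_n=None, dynamic_max=None):
    """Run one Parent-Sibling-Primary generation cycle.

    Exactly one mode applies per run:
      fixed_n is None and dynamic_max is None -> One to One (1-1)
      fixed_n = N (constant > 1)              -> Fixed One to Many (1-N)
      dynamic_max = N                         -> Dynamic Zero to Many (0-M)
    The '+ One (1)' vs '+ Many (M)' Sibling-Primary variants correspond to
    passing one or several sibling domains.
    """
    for _ in range(parent_count):
        parent.generate_record()                  # one Parent iteration

        if fixed_n is not None:
            primary_count = fixed_n               # exactly N Primary records
        elif dynamic_max is not None:
            primary_count = random.randint(0, dynamic_max)  # 0..N records
        else:
            primary_count = 1                     # exactly one Primary record

        for _ in range(primary_count):
            for sibling in siblings:              # each Sibling generates one
                sibling.generate_record()         # record before the Primary
            primary.generate_record()             # Primary references Parent + Siblings
```

Passing a single Sibling Domain gives the three "+ One (1)" variants, while passing several gives the "+ Many (M)" variants.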
The following is a concrete example where the User Domain is the Primary Domain, the Department Domain is the Parent and the Security Level is the Sibling Domain. | https://genrocket.freshdesk.com/support/solutions/articles/19000034324-parent-sibling-primary-test-data-generation-design-pattern |
Second-graders have the skills to start delving more deeply into scientific investigations, according to educational experts at PBS Parents. In earth science, physical science and life science, children in second grade can create an array of science projects that provide them with opportunities to explore the world around them, connect math to science, communicate their ideas, use the tools of the trade and explore the scientific process all while getting hands-on experience.
1 Earthy Adventures
Sedimentary rocks offer a top-tier option for a winning science project in the earth sciences. Second-grade students can learn how sedimentary rocks form, by layers of different particles condensing, with this educational experiment from the Science Buddies website. Collect natural materials such as small pebbles, sand or soil, and pour them into a clear plastic bottle. Add enough water to cover up the materials, capping it off and allowing the water to evaporate. Have your child explore and identify the different layers of sediment that form as a result. Another option is to sift out a handful of dirt using a strainer and running water. Ask the student to predict what will stay in the strainer and what won't. Take a look at the sediment that remains, having your child record what he sees or take photos.
2 Let's Get Physical
Second-graders still can conduct physical science experiments like pros, even if they aren't as developmentally ready to work with chemicals as high school or college students. PBS Parents suggests using the kitchen as a backdrop for physical science projects that early elementary school children can use to explore reactions, changes and transformation. Instead of just watching ice freeze, add a few drops of food coloring to make the experiment more exciting. Have the child predict what will happen when the color drops into the water in an ice cube tray, also asking what she thinks will happen after the tray spends a few hours in the freezer. You also can try a simple reaction experiment, using basic kitchen items such as vinegar, water, cornstarch, baking soda and lemon juice. Have your student mix the different substances, predicting how they will react and then making observations.
3 The Circle of Life
Second-grade students can explore the life cycle with a plant or animal project. According to the Science Project Lab website, second-graders can explore how seeds spread and germinate with an at-home activity. Collect or buy seeds or use a seedy dandelion. Have the child act like a bird dropping seeds into a pot of soil or blow a fluffy dandelion that's gone to seed over a planter of dirt. Water the seeds regularly and record the growth. Another option is to study the life-cycle of a butterfly by starting with a caterpillar and observing it as it goes through a metamorphosis. Have the child take or draw pictures during each step in the cycle to document the activity.
4 Egg Drop
Explore the effects of gravity with your second-grader during an egg-drop experiment. This time-tested activity allows children to better understand the impact that gravity has, while encouraging critical thinking and problem-solving skills. Brainstorm ways to pad the egg for its drop. For example, the Weird Science Kids website suggests using plastic baggies with rice cereal as egg pillows or taping foam cups around the egg. Have an adult drop the egg -- in its contraption -- from a deck or a similar high space. The child can check the egg to see how it fared, or you can try a few different drops from varying heights to make a comparison. | https://classroom.synonym.com/winning-science-projects-2nd-graders-32299.html |
Asuka added to WWE shows in Japan
NXT Women’s Champion Asuka has been added to the WWE live events in Japan on June 30 and July 1 at the Ryogoku Sumo Hall in Tokyo, Japan. Asuka will be replacing Mickie James on the cards, teaming with Bayley and Sasha Banks against Alexa Bliss, Nia Jax and Emma.
WWE PSA on seatbelts
WWE posted the following PSA video on Thursday featuring WWE US Champion Kevin Owens for the new “Click It or Ticket” campaign for wearing seatbelts while driving. | https://soccer.wrestleview.com/wwe-news/81761-asuka-added-to-wwe-shows-in-japan-wwe-psa-on-seatbelts/ |
Using a natural peanut butter produces a slightly drier dough but equally delicious cookies. Any peanut butter can be used.
Ingredients:
- 1 1/2 cups (375 mL) natural peanut butter
- 1/2 cup (125 mL) margarine or butter
- 3/4 cup (175 mL) lightly packed brown sugar
- 1/2 cup (125 mL) granulated sugar
- 1 large egg
- 1 tsp (5 mL) vanilla
- 1 cup (250 mL) oat flour
- 1/2 cup (125 mL) all-purpose flour
- 1/3 cup (75 mL) ground flaxseed
- 1 tsp (5 mL) baking soda
- 1/2 tsp (2 mL) salt
- 1 cup (250 mL) old-fashioned oats
- 1/3 cup (75 mL) chopped roasted peanuts
- 1/3 cup (75 mL) dried cranberries
Instructions:
- Preheat oven to 375° F (190° C) and position rack in center of oven.
- In a large bowl, cream peanut butter, margarine, brown sugar, and granulated sugar until light and fluffy. Beat in egg and vanilla.
- In a small bowl, combine oat flour, flour, ground flaxseed, baking soda and salt. Stir and blend into creamed mixture. Add oats, peanuts and cranberries. Combine until all ingredients hold together to form a dough.
- Scoop dough using 1 Tbsp (15 mL) measure, slightly rounded on top. Press and squeeze dough in palm of your hand to form into 1 1/2 inch (3.5 cm) balls. Place 2 inches (5 cm) apart on baking sheet that has been lightly sprayed with a nonstick cooking oil. Flatten cookies with the back of a floured fork, making a criss cross pattern.
- Bake 10 minutes or until golden brown. Let cool 5 minutes on baking sheet. Remove cookies to cooling rack.
Yield: 44 cookies
Serving Size: 2 cookies. Each serving contains 1/2 tsp (2 mL) of flax.
Cook's Notes:
- To make oat flour: In a small blender or coffee mill, process oats until finely ground.
- Cookies can be stored in an airtight container for up to 3 days or frozen up to 3 months. | https://www.saskflax.com/health/peanut-butter-cookies |
1. Create an impact diagram with sections: “Confirmation”, “Innovation” and “Scenarios” on a large piece of paper.
2. Make teams of two or three people.
3. Shuffle the cards among the teams evenly. You might like to make a pre-selection of cards if you’re short on time.
4. Each team should position their cards in the appropriate sections on the impact diagram.
5. Discuss, as a team, which pattern of change is applicable to each of the cards. Write them down on separate pieces of paper and place them next to the cards. You should now have agreed on one pattern of change for each card on the table, giving you a complete overview of the current situation.
6. Once all the cards are positioned, group the cards into categories. Assign a new pattern of change to the categories that is based upon the individual cards within that category. The category and the main pattern of change combined can be defined as a driving force.
7. By now you should have identified the market drivers, innovation opportunities and key uncertainties.
8. As a whole group discuss how these drivers influence your business.
– innovation – Which business opportunities arise out of the drivers of change? Is your company currently addressing those issues?
– scenarios – Which future scenarios can you derive from the drivers of change? Is your company currently aware of those scenarios?
– During step 4: after you have positioned the STEEP cards, add forces from the meso- (transactional) environment.
This workshop is included in the deck of cards and is also available in Dutch, German and Spanish.
1. Select the cards that are relevant to the business case.
2. Shuffle the cards and deal them among the players.
3. In turns a player places a card on the impact diagram. The player names the pattern of change and explains why he put the card there.
4. After each player’s turn, check whether the cards on the impact diagram need to be re-positioned as a result of the discussion.
5. When all cards have been played, identify together the key uncertainties that can be used to create future scenarios.
How do I work with Jasmine Greene?
You can start working with Jasmine Greene in three simple steps:
- Invite Jasmine to respond to your new or existing project.
- Review Jasmine's proposal, portfolio, and quote after they've replied to your invitation.
- Hire Jasmine Greene.
You can learn about how Voices works here.
How do I pay Jasmine Greene?
Payment to Jasmine Greene is managed through our SurePay™ payment protection service in a few simple steps:
- Pay securely by Visa, Mastercard, or PayPal directly through Voices.
- Voices holds your funds until you're satisfied with Jasmine's work.
- When the work is done, download your files.
What voice over skills does Jasmine Greene perform? | https://www.voices.com/profile/jasminelgreene |
The Federal Emergency Management Agency and Florida Division of Emergency Management have received the following application for Federal grant funding. Final notice is hereby given of the Federal Emergency Management Agency’s (FEMA) consideration to provide funding in the form of the Hazard Mitigation Grant Program. Funds will be provided in accordance with Section 404 of the Robert T. Stafford Disaster Relief and Emergency Assistance Act, Public Law 93-288, as amended.
Under the National Environmental Policy Act (NEPA), federal actions must be reviewed and evaluated for feasible alternatives and for social, economic, historic, environmental, legal, and safety considerations. Under Executive Order (EO) 11988 and EO 11990 FEMA is required to consider alternatives to and to provide public notice of any proposed actions in or affecting floodplains or wetlands. EO 12898 also requires FEMA to provide the opportunity for public participation in the planning process and to consider potential impacts to minority or low-income populations.
Funding for the proposed project will be conditional upon compliance with all applicable federal, tribal, state, and local laws, regulations, floodplain standards, permit requirements, and conditions.
Applicant:
City of Panama City
Project Title:
HMGP-4565-014-R
Location of Proposed Work:
The area affected by this project consists of homes in the following locations:
Proposed Work and Purpose:
The proposed project will replace the sanitary sewer manhole covers and rings with water-tight units to eliminate the flow of surface stormwater from entering the manholes.
Project Alternatives:
The alternatives to the project that have been and will be considered are 1) the no-action alternative and 2) increasing the size of the existing lift stations. These alternatives to the proposed project are not viable because under Alternative 1) repetitive sanitary sewer spills and sanitary sewer backups are not alleviated, and the needs of the community would not be served; Alternative 2) is cost-prohibitive, and therefore not practicable.
Comment Period:
Comments are solicited from the public; local, state or federal agencies; and other interested parties in order to consider and evaluate the impacts of the proposed project. The comments should be made in writing and addressed to the Florida Division of Emergency Management, Mitigation, 2555 Shumard Oak Blvd., Tallahassee, FL 32399-2100. These are due within 15 days of this notice. The State will forward comments to applicable regulatory agencies as needed. Interested persons may submit comments, obtain more detailed information about the proposed action, or request a copy of the findings by contacting: | https://www.panamacity.gov/CivicAlerts.aspx?AID=507 |
Conversion of methane-derived carbon and microbial community in enrichment cultures in response to O2 availability.
Methanotrophs not only play an important role in mitigating CH4 emissions from the environment, but also provide a large quantity of CH4-derived carbon to their habitats. In this study, the distribution of CH4-derived carbon and microbial community was investigated in a consortium enriched at three O2 tensions, i.e., the initial O2 concentrations of 2.5 % (LO-2), 5 % (LO-1), and 21 % (v/v) (HO). The results showed that compared with the O2-limiting environments (2.5 and 5 %), more CH4-derived carbon was converted into CO2 and biomass under the O2 sufficient condition (21 %). Besides biomass and CO2, a high conversion efficiency of CH4-derived carbon to dissolved organic carbon was detected in the cultures, especially in LO-2. Quantitative PCR and Miseq sequencing both showed that the abundance of methanotroph increased with the increasing O2 concentrations. Type II methanotroph Methylocystis dominated in the enrichment cultures, accounting for 54.8, 48.1, and 36.9 % of the total bacterial 16S rRNA gene sequencing reads in HO, LO-1, and LO-2, respectively. Methylotrophs, mainly including Methylophilus, Methylovorus, Hyphomicrobium, and Methylobacillus, were also abundant in the cultures. Compared with the O2 sufficient condition (21 %), higher microbial biodiversity (i.e., higher Simpson and lower Shannon indexes) was detected in LO-2 enriched at the initial O2 concentration of 2.5 %. These findings indicated that compared with the O2 sufficient condition, more CH4-derived carbon was exuded into the environments and promoted the growth of non-methanotrophic microbes in O2-limiting environments.
| |
Pets at Home claims it has successfully broken the world record for the number of dogs washed within 12 hours.
Across the Midlands, 5,613 dogs were washed, which broke the previous record of 5,000.
Sarah Jones, salon manager at Pets at Home in Cannock, said that the Cannock store saw 120 dogs washed and for every dog, customers were charged £5 each. All proceeds are being donated to Support Adoption For Pets, which helps unwanted pets across the UK.
Ms Jones said: “We’re just thrilled, really thrilled. The fact that we had so many people taking part all over the region was incredible and especially because we raised money for a good cause.”
Pets at Home is waiting for confirmation from the Guinness World Records, but said it is hopeful that the record has been broken. | https://www.overthecounter.news/news/pets-at-home-claims-world-record-for-dog-washing.html |
The DP80 series has been discontinued. Please see the DP25B-S as a possible alternative or contact our Pressure, Strain and Force Engineering department.
The OMEGA DP80 Series digital strain gage indicators demonstrate impressive performance for wide compatibility with lower output, bonded foil type transducers, as well as higher output semiconductor type transducers. One of four input ranges are selectable within each instrument to optimize the sensitivity. The front panel membrane keypad facilitates accurate digital scaling, eliminating any dip switches, pot adjustments or reference standards. Three alphanumeric LEDs are included for engineering unit labels (e.g. psi, KG, LBS) or can be assigned as "0" for active display of dead zeros. An additional feature of the DP80 is the inclusion of a primary and secondary engineering units display, both of which easily be selected at the front panel. The secondary display can be any mathematical equivalent to the primary display. System calibration can be accomplished using 3-point live load (actual weight on scale) or by entering the appropriate transducer sensitivity (mV/V).
Cutout: 68 mm H x 274 mm W (2.7 x 10.7").
Note: Model DP87 can accommodate one additional option. Models DP88 and DP89 can accommodate up to four additional options.
Turning up the Heat: New Report Highlights Impacts of Climate Change on Commercial Fisheries in Atlantic Canada and Eastern Arctic
The global climate and the ocean are closely linked in a number of ways, and the ocean is a key part of what makes life on Earth liveable. The ocean is responsible for every second breath of air we take. It’s also helped slow down climate change: the ocean has absorbed most of the excess heat caused by greenhouse gas emissions, and it takes carbon dioxide out of the atmosphere.
But our gain has come at a cost to the ocean, to the creatures that live within it, and to the communities that rely closely on the ocean for their livelihoods. Rising water temperatures and acidification caused by excess CO2 are disrupting marine ecosystems, with consequences for all aquatic life—including the species that support commercial fisheries.
Oceans North’s newest report looks at the impacts of climate change on commercial fisheries in Atlantic Canada and the Eastern Arctic. The report finds that climate change is having and will continue to have effects on the distribution, yield, and productivity of fisheries throughout these regions.
Some of the impacts that are already occurring, based on results from scientific studies and some corroborated by fishers’ observations, include:
- Warmer water temperatures, in particular in the Gulf of Maine, the Gulf of St. Lawrence and the Scotian Shelf, which can lead to reduced oxygen levels, species migrating to more northern areas, and an increase in invasive species.
- Earlier sea ice melting, impacting the timing of phytoplankton blooms and in turn the spawning of commercially caught species.
- Decrease in overall size of most species.
- Impeded growth of shrimp, lobster and phytoplankton (particularly those whose skeletons are made of calcium) due to ocean acidification.
- Increase in vulnerability to disease.
The report also examines the extent to which Fisheries and Oceans Canada is incorporating climate change into its fisheries-management decisions. It finds that while climate variables are increasingly incorporated into stock assessments, the management plans and related quota decisions have yet to integrate the impacts of climate change into how fisheries are managed. To address this gap, the report makes several recommendations for how to make commercial fisheries more resilient in the face of climate change. Some of the recommendations include:
- Develop a national fisheries and climate framework that clearly identifies a transparent and accountable process for how climate information can go from data to decision-making.
- Assess the opportunity for nature-based solutions to climate change in the marine environment, which will reduce the impact of climate change on fisheries by maintaining and restoring sequestered blue carbon.
- Reduce non-climate stressors on Canadian fish populations, including making precautionary decisions around quotas.
- Implement more adaptive fisheries management measures.
- Complete climate vulnerability assessments of Canadian and transboundary fish species.
- Enhance opportunities for ecosystem monitoring and data sharing between departments and stakeholder groups.
A summary version of the report is available here, and you can read a full version of the report here.
Alex Tesar is a communications specialist at Oceans North. | https://www.oceansnorth.org/en/blog/2021/05/turning-up-the-heat-new-report-highlights-impacts-of-climate-change-on-commercial-fisheries-in-atlantic-canada-and-eastern-arctic/ |
PROBLEM TO BE SOLVED: To provide a method and system for providing a consumer aggregation service in a network service provider.
SOLUTION: First, when a user registers with the consumer aggregation service 110, the service 110 replaces the identification information of the registered user 134 with its own identification information when the registered user 134 browses a World Wide Web (WWW) site. Moreover, the consumer aggregation server 110 intercepts an electronic merchandise order placed by the registered user 134 with a retailer 124 through a network 100, charges the registered user 134 for the order, carries out the order with the retailer 124 on behalf of the registered user 134, and has the retailer 124 charge the consumer aggregation service itself for the order. The consumer aggregation server 110 also collects coupons or bonuses from the retailers 124 based on shopping performed by the registered users 134, stores them in a database, and distributes the collected coupons or bonuses to the registered users 134 in a prescribed manner.
COPYRIGHT: (C)2002,JPO | |
Disclosed is an apparatus for sensing whether a wearable device is worn. The apparatus comprises: a housing combined to a wearable device of a user; a processor included in the housing; a battery included in the housing; and a conductive fastening unit. The battery supplies power to the processor. The processor measures capacitance between the conductive fastening unit and a body of the user. The capacitance is proportional to the area of the conductive fastening unit and inversely proportional to a distance between the conductive fastening unit and the body of the user.
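The abstract gives only the proportionality, not a formula. As a rough sketch, the textbook parallel-plate approximation below (not taken from the patent, and using hypothetical dimensions) illustrates how capacitance grows with electrode area and shrinks with the gap to the wearer's body:

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity in F/m

def capacitance(area_m2, distance_m, relative_permittivity=1.0):
    """Parallel-plate approximation: C = eps0 * eps_r * area / distance."""
    return EPSILON_0 * relative_permittivity * area_m2 / distance_m

# Hypothetical numbers: same electrode area, different gaps to the wearer's body
worn     = capacitance(area_m2=2e-4, distance_m=0.5e-3)   # strap held against the skin
not_worn = capacitance(area_m2=2e-4, distance_m=20e-3)    # device lying on a table
print(worn, not_worn)  # roughly 3.5 pF vs 0.09 pF, an easy threshold for the processor to detect
```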
COPYRIGHT KIPO 2020 | |
Owls do more than just hoot this summer
The importance of imaginative thinking in a child’s development is vital to their growth as learners, dreamers, and doers. Whether we use our imagination to map out potential futures or escape somewhere made of magic and mystery, these journeys through thought and play open experiences for children that had previously existed only in their dreams. This summer, we are thrilled to invite your children to join us for Book Nook’s World of Imagination. We will explore the places and things that help to shape the wonders of childhood by implementing books, art, music, academics, and more over the course of our six-week program. Each theme will include a lesson that enhances the scope of whatever world we’re visiting that week, be it based in reality or fantasy. Your child will bring home a folder each week of the completed curriculum, along with various projects and treats that correlate to the theme.
SESSION ONE: JUN 28-JUL 30
WEEK 1: DOWN BY THE BAY
WEEK 2: GO FETCH
WEEK 3: HAPPILY EVER AFTER
WEEK 4: SOMEWHERE OVER THE RAINBOW
WEEK 5: SHAKE, RATTLE, ROLL
SESSION TWO: AUG 2-SEPT 3
WEEK 1: ALL ABOARD!
WEEK 2: CIRCUS
WEEK 3: AROUND THE WORLD
WEEK 4: UP, UP, AND AWAY
WEEK 5: TROPICAL
Summer Schedule & Fees
2021 Summer Reading Program Schedule
5 OR 10 WEEK SUMMER SESSION
- 3 DAY OR 5 DAY OPTION
3 DAY OPTION (T/W/TH): $4050 / 10 WEEKS, $2025 / 5 WEEKS, $465 / WEEK
5 DAY OPTION (M-F): $6750 / 10 WEEKS, $3375 / 5 WEEKS, $775 / WEEK
5-WEEK SESSION ONE
JUNE 28-JULY 30
9:30-11:30 | Ages 2.5-3.5
1:00-3:00 | AGES 3.5-5
5- WEEK SESSION TWO
AUG 2-SEPT 3
9:30-11:30 | Ages 2.5-3.5
1:00-3:00 | AGES 3.5-5
ENRICHMENT CLASSES - 1 or 2 DAY OPTIONS (M-F)
AGES 18MOS-8YEARS OLD
2 DAY OPTION: $2700 / 10 WEEKS, $1350 / 5 WEEKS
1 DAY OPTION: $1350 / 10 WEEKS, $675 / 5 WEEKS
We can't wait to welcome your children to Book Nook this summer and to open their minds to new worlds of possibility. We hope you'll come along on the adventure! Contact [email protected] or (212) 873-2665 for more information. | https://booknooknyc.com/upper-west-side-summer-reading-programs/ |
Over the years, several designs for home furnishing and interior decorating have been developed, including nature-themed knotty pine paneling. But despite the outdoor feel it lends a house, knotty pine paneling tends to fade with time, and its natural beauty appears to degrade. Refinishing it so it looks newly installed is often necessary, especially in older houses. This is easily accomplished with an oil-based primer. Here are the steps for how to do it.
Step 1- Preparing the Panel for Coating
Since the knotty pine panel has been exposed for many years, it is best to remove all dirt which is adhered to it so as to give a glossy finish after the coating of the oil-based primer. Accomplish this by using any all-purpose cleaner of your choice and water. Wet the whole panel first with water and then diligently work from top to bottom removing all grime attached to it. Once done, wash off the soapy mixture and dirt with a fresh batch of water. Allow the clean panels to dry. When the panels have completely dried, roughen the surface of the panel in order to provide a venue for the coating to seep into the panel and ensure that it will adhere well. For better rough texture, use fine grit sandpaper and proceed as evenly as you can, ensuring that you cover the area thoroughly.
Step 2 – Cover the Spaces You Do Not Want Painted
Once the entire panel has been cleaned and roughened, protect the spaces or spots which you do not want painted. Use old newspaper to cover areas like windows, doors and other trim, and fasten the newspaper to the edges using masking tape. It helps to use at least a double layer of paper over these spots.
Step 3 – Application of Oil-Based Primer to the Panel
Start the application of the oil-based primer on the knotty portion of the panel. Use a paint brush to do this. When applying oil-based primer to this area, allow a few minutes for the primer to soak into the grooves. Make sure to cover all knotty sections to give your panel a smooth finish. Always work from one side going to the other and never jump between spots when you apply the oil-based primer. This is to ensure that you fully cover all areas, especially the knotty sections of the panel, as well as to limit smudging and uneven application of the primer. After doing this, coat the entire panel with oil-based primer using a roller. Again work from one side to the other. You want to have a smooth finish so you must ascertain that you painted the panel evenly with the primer. | https://www.doityourself.com/stry/how-to-use-oilbased-primer-over-knotty-pine-paneling |
Our seating renovation story
We started planning the seating renovation at the end of 2019, but it wasn’t until the beginning of 2022 that we removed the first row and started this journey. Follow along with us as we make our way through this project.
Here’s where our story starts. We have 10 rows of older seats in the front, and 7 rows of new(er) rocking chair in the back, a total of 215 seats. We decided to update the seats instead of replace them to keep the historic setting and preserve our history.
A view of the seats before renovations.
First Row – 107 volunteer hours – February 4, 2022 – May 20, 2022
First, we removed the backrest of the seats, this is held on by 4 bolts and 4 nuts and set them on our stage.
Next, we removed the seats that are attached the the metal arms, these are attached by 2 large bolts, washers, lock washers, and nuts, and set them on the stage.
Then, the metal arms that attach to the floor. There are bolts in the concrete floor, and the arms are attached to those in four places.
After the seats are removed, they must then be disassembled: the metal parts and the upholstery parts have to be separated, because they go to different places. The metal parts are transported to Maguire Products in Aston to be shot blasted, and after that they are transported to A.W. Mercer to be powder coated. The upholstery parts go to a local upholsterer to be torn down and fitted with new foam and fabric.
We had to make jigs in order to put the seats back together, find specific hardware in order to replace the old, and figure out the whole process and how to actually get it done.
The first row is gone! This was a long and enduring process to get to this point, but here we are. Can’t stop now!
The wooden box on the floor is to cover the bolts sticking out of the concrete floor. We wanted to paint this in the same color as the floor so it would not look like we were “under construction”. We do try to make the theatre look presentable at all times, no matter what projects we have.
Below are a few pictures of the process of dismantling the seats, the metal from the fabric. The parts are taken to a warehouse and lined up. They need to be numbered so that they are installed in the same place when they get back to the theatre.
The seats and backs all look the same, but their measurements all differ slightly (only by a few millimeters), which makes a big difference when they are reassembled, especially because they have a new layer of paint on them.
The updated seats now have all new bolts, screws, washers, locking washers, etc, along with a new fabric pattern and paint.
ROW 2 – May 20, 2022 – July 4th, 2022 – 57.5 Volunteer Hours
ROW 3 – July 4th, 2022 – August 4th, 2022 – 60 Volunteer Hours
The rows are going faster now as we are streamlining our processes. Row 3 did have more chairs than the previous 2 rows.
ROW 4 – August 4, 2022 – September 4, 2022 – 57 Volunteer Hours
ROW 5 – September 4, 2022 – October 15, 2022 – 44 Volunteer Hours
ROW 6 – October 15, 2022 – November 13, 2022 – 35 Volunteer Hours
ROW 7 – November 5th, 2022 – November 19th, 2022 – 36 Volunteer Hours
We have started taking out rows before installing the row before, just to move the process along as much as we can. So far to this point, the 2 of us (Ken and Shannon Shaw) have put in 411 volunteer hours on this project, and counting.
ROW 8 – November 18th –
Come See Us!
Address:61 N Reading Ave, Boyertown
Phone Number484-415-5517
Make a Donation
Please consider making a donation to the Boyertown State Theatre: | https://boyertownstatetheatre.com/the-saga-of-the-great-state-theatre-seats/ |
How to Play Poker
The game of Poker has a number of rules that govern how to play. You must have a minimum hand in order to bet in the game, which is usually a pair of jacks. The more raises you make, the larger your pot will get. If you don’t have enough money to raise your stake, you will be forced out of the game. To avoid this problem, you must first learn how to play Poker. Here are some tips:
There are several different kinds of poker. Poker hands consist of five cards. The value of a poker hand is inversely proportional to its mathematical frequency. Players may make bets when they think they have the best hand and hope the other players will match the bet. This strategy is known as bluffing. Bluffing can win you the game by betting that you have the best hand, and the other players must match the bet.
The betting rounds of Poker last between two and four minutes. The betting process involves each player putting their chips into the pot before the dealer deals them out. If everyone has a hand, all but one player can choose to fold. Then, the remaining players must reveal their hands. In the final betting round, the player with the best hand wins the pot. The winning player collects the pot, which is known as a showdown. The pot size is determined by how much the player bets before the game. | https://thegeam.com/index.php/2022/07/17/how-to-play-poker-2/ |
Quantum was a color vector arcade game designed by Atari Inc. in 1982.
Gameplay
The premise of the game was related loosely to quantum physics in that the player directed a probe with a trackball to completely circle atomic "particles" for points, without touching various other particles. Once the particles were surrounded by the probe's tail, they were destroyed.
Entering one's initials for the game's high score table was unique compared to all other games of the era; the player would use the trackball to circle the letters of his or her initials in the same fashion that was used to circle the particles during gameplay. If the player achieved the highest score on the table, the initials screen was preceded by another in which the player would use the trackball to actually draw his or her initials in an entry box. Some players were adept enough with the trackball to actually write their names legibly in the box.
The Particles
- Electrons: 20 points - Rotated slowly around the nucleus
- Nuclei: 300 points - Moved slowly around, bouncing off walls. Would clip the probe's tail if it crossed it. Capturing all the nuclei on the screen advanced play to the next level.
- Photons: 200 points - Entered from one edge of the screen, traveled across, and disappeared off the other side.
- Pulsar: 400 points - Travelled towards the probe, pulsing its "arms" in and out as it moved.
- Positrons: 200 points - Formed by stray electrons left when a nucleus exploded. Moved from its point of origin to the edge of the screen very quickly.
- Splitters: 100 points - Travelled in a random pattern across the screen, flashed colors and split into 3 after a few seconds, each of these 3 splitting again after a few more seconds.
- Triphons: 100 points - Moved around the screen randomly.
Trivia
- This was one of two games designed for Atari by General Computer Corp. (the other being Food Fight) as a result of a legal settlement between Atari and GCC. The production run for this game is rumored to have been around 500. The game did poorly in the arcades and rumor has it that some disgruntled operators returned the game to Atari. Many unsold/returned units were sold to the public and Atari employees. | https://gamicus.gamepedia.com/Quantum |
Ed. note: Critical path practitioners have for some time discounted the value of three time estimates and variance calculations as being unrealistic and impractical to use. In this article, the authors take a new look at the uses of variance which may be of practical use in certain types of projects.
Introduction
Project management frequently uses network diagrams to plan the project, evaluate alternatives and control progress toward completion. The two most common networking techniques, CPM (Critical Path Method) and PERT (Program Evaluation and Review Technique), while having much in common, were independently derived and are based on different concepts. Both techniques define the duration of a project and the relationships among the project’s component activities. CPM uses a single deterministic time estimate to emphasize minimum project costs while downgrading consideration of time restraints. PERT, on the other hand, uses three time estimates to define a probabilistic distribution of activity times which emphasizes minimum project duration while downgrading consideration of cost restraints. It is therefore not surprising that CPM is often the choice of cost-conscious private industry, while PERT tends to be used more frequently in critically-timed government-related projects.
While these two techniques are based on different assumptions, they cannot be altogether independent of one another because of the obvious relationship between time and cost. Perhaps the “ideal” network technique would combine the concepts of CPM’s crashing strategy with PERT’s probability distribution of activity times to derive the optimum project duration and cost. While this goal has yet to be achieved for application to practical projects, much insight can be obtained into the “real world” effects of crashing strategies by using PERT’s probability distribution to allow the activity times to vary from the estimate, as they obviously do in actual projects.
When PERT is used on a project, the three time estimates (optimistic, most likely, and pessimistic) are combined to determine the expected duration and the variance for each activity. The expected times determine the critical path, and the variances for the activities on this path are summed to obtain the duration variance for the project. A probability distribution for the project completion time can also be constructed from this information (a procedure that is all too frequently ignored in practice). However, the variances of activities which do not lie on the critical path are not considered when developing this project variance, and this fact can lead to serious errors in the estimate of project duration.
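As a reminder of the calculation being described, a minimal sketch is given below. It applies the standard PERT formulas, expected time (a + 4m + b)/6 and variance ((b - a)/6)^2, to a set of hypothetical critical-path activities and sums the variances along that path; the activity values are illustrative, not taken from the exhibits.

```python
def pert_expected(a, m, b):
    """PERT expected duration from optimistic (a), most likely (m), pessimistic (b)."""
    return (a + 4 * m + b) / 6.0

def pert_variance(a, b):
    """PERT variance; only the optimistic/pessimistic spread matters."""
    return ((b - a) / 6.0) ** 2

# Hypothetical critical-path activities as (a, m, b) triples
critical_path = [(4, 6, 8), (8, 10, 12), (3, 5, 7)]

expected_duration = sum(pert_expected(a, m, b) for a, m, b in critical_path)
duration_variance = sum(pert_variance(a, b) for a, _, b in critical_path)
print(expected_duration, duration_variance)  # 21.0 days, variance ~1.33 (std dev ~1.15 days)
```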
A similar problem exists when the CPM technique is used to develop a crashing strategy where two or more paths through the network have nearly the same length. If the usual assumption of deterministic activity times is dropped and the activity duration is allowed to vary, a decrease in the length of the critical path may not result in an equivalent decrease in the project duration because of the variances inherent in the parallel or alternate paths. These variations of activity times can even allow the alternate path to become critical in specific instances, as indicated in figure 1. Thus, simply allowing the activity times to vary slightly from their estimates, as they do in every actual project, can cause serious errors in a CPM crashing strategy and lead to wasted resources and cost overruns.
[Figure 1]
In this paper, probabilistic activity times are derived for a simplified project network, and the project’s duration is determined at successive crash levels using simulation techniques. At each crash level this simulated project duration is compared to the traditional CPM results. The purpose is to analyze the effect of the probabilistic activity times on the estimates of project duration and cost as both the activity variances and the number of parallel paths are allowed to increase. The ultimate goal of this research is to develop a series of “rules-of-thumb” which the practitioner can use to develop a more accurate and cost-effective crashing strategy for his project.
Previous Development
As it was originally developed, CPM was principally concerned with establishing activity prerequisites (technological ordering) and determining basic network solutions (7,5). It was useful primarily as a vehicle for keeping track of activities in the project and for identifying and analyzing activity conflicts or sequencing flexibilities which could effect project completion. Later development led to the use of cost slope analysis as a means of calculating the shortest project time within the constraints of overhead costs and subject to the assumption of deterministic time estimates (10). Basically, this technique solves for the effective savings in time when it is possible to “crash” or shorten the individual activity times. The time actually saved (effective time) is divided into the cost increase resulting from compressing the activity to determine the net cost slope (see exhibit #5). These slopes are then compared to the overhead costs to determine if reducing the project duration will result in an overall cost savings. Finally, linear programming techniques were developed to solve the entire problem (11) resulting in a method for scheduling a project’s activities at the most cost effective time. Note, however, that this entire development is based on the basic CPM assumption of deterministic activity time estimates, an assumption which has already been described as leading to potentially serious errors in practical applications.
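The cost-slope arithmetic itself is simple; the sketch below uses hypothetical figures (the exhibit values are discussed later) to show the comparison against overhead cost that decides whether a compression is worthwhile.

```python
def cost_slope(normal_cost, crash_cost, normal_time, crash_time):
    """Incremental cost per day saved when an activity is compressed."""
    return (crash_cost - normal_cost) / (normal_time - crash_time)

# Hypothetical activity: 10 days at $400 normally, or 8 days at $520 when crashed
slope = cost_slope(normal_cost=400, crash_cost=520, normal_time=10, crash_time=8)

overhead_per_day = 75                     # hypothetical daily overhead saved per day cut
print(slope, slope < overhead_per_day)    # 60.0 -> crashing costs less than the overhead it saves
```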
PERT, on the other hand, was specifically designed to overcome the problems inherent in assuming deterministic activity time estimates (9). Three time estimates for each activity (optimistic, most likely, and pessimistic) are required, but these allow the user to develop a probability distribution for the length of each activity. With this accomplished, the technological ordering and network solutions are calculated in a manner practically identical to that of CPM. Since the initial development of PERT, several studies have analyzed the activity time errors that can arise from using the basic PERT quantitative assumptions (8,12), but research has also shown that use of the PERT estimates leads to results which are much more accurate than those obtained from CPM (13).
In addition, Van Slyke has used a Monte Carlo technique to simulate a PERT network in an effort to decrease the effect of network errors (14), errors which are common to both PERT and CPM. This present paper continues this analysis of network errors, and also incorporates a consideration of activity time variance into the development of a more realistic CPM crashing strategy.
Experiment:
BASIC NETWORK. To analyze the effects of probabilistic activity times and parallel paths on the crashing strategy of a network it was necessary to design an investigation technique that would be sensitive to slight variations in input data. This would permit changes, such as an increase in variance or an increase in the number of paths, to directly affect the network solution. A very basic network was constructed consisting of two paths with three activities on each path (exhibit #1), allowing for both ease of calculation and flexibility. The supporting data for this network and the time activity diagram are presented in exhibits #1 and #4 respectively. This data includes estimates of the optimistic time (a), the most likely time (m) and the pessimistic time (b) for each activity at the normal and three crash levels. These estimates allow the variance to be calculated as per the normal PERT procedure. A symmetrical distribution of activity times was used to avoid contaminating the results with any possible effects of skewness. In the experiment the estimates were used to determine an expected time and variance for each activity and to provide an input to the program which simulated the actual activity completion times.
Three crash levels were included to insure enough compressions so that the initial critical path would be shortened to the same length as the alternate path. As can be seen from exhibit #4 Part A, the difference between the two paths using PERT expected times is six days. The three crash levels allow each path to be shortened a total of nine days, one day at a time; therefore, at some level in the crashing sequence the length of the large path will approach the length of the shorter path. Exhibit #1 also includes the costs associated with the normal and crash times for each activity. The cost slopes were computed by dividing the increase in costs by the expected time saved. For example, if activity A is crashed from normal to 1st crash (N-S1) the cost slope would be 5/1. Since the costs of an activity usually increase in actual projects when the time available is shortened, all cost slopes were assumed to increase as the crash level increased (see exhibit #5). Care was taken to exclude any data which would lead to a unique solution not representative of the effect of changes in variances of activities or in network structure.
ANALYSIS OF THE BASIC TWO PATH NETWORK BY TRADITIONAL CPM METHODS. First, the basic PERT approach was used to determine the expected times for each activity. These times were then used as if they were deterministic to solve for the critical path using the usual CPM techniques. ABC was found to be the critical path with a length of 55 days. The parallel path DEF had a length of 49 days. At this point a crashing strategy, following the CPM approach of cost slope evaluation, was developed. The results of this strategy are presented in exhibit #6. On the 7th compression the two paths had the same length and it was necessary to crash both paths simultaneously (parallel crash) to further reduce the project length.
ANALYSIS OF THE BASIC NETWORK BY SIMULATION. The activity times were simulated by using the a, m, and b values to determine a distribution for each activity as per the usual PERT technique. Then a random number system was used to define an assumed actual activity duration for each activity at each trial. The critical path and project duration were determined at each trial by summing the durations of the activities on each path. Each compression of each trial required a separate simulation. The results of these simulations are presented and compared with those derived from the traditional CPM technique in exhibit #6.
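A minimal Monte Carlo sketch of this procedure is shown below. Because the exhibit data are not reproduced in this transcription, the (a, m, b) triples are hypothetical values giving expected path lengths of 55 and 49 days; durations are drawn from a scaled Beta(4, 4), which reproduces the PERT variance ((b - a)/6)^2 for symmetric estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_path(path, trials):
    """Total sampled duration of one path for the requested number of trials."""
    total = np.zeros(trials)
    for a, m, b in path:
        total += a + (b - a) * rng.beta(4, 4, size=trials)  # symmetric beta-PERT draw
    return total

# Hypothetical (a, m, b) estimates, each symmetric with a range of 3 (variance 0.25)
ABC_normal  = [(18.5, 20, 21.5), (18.5, 20, 21.5), (13.5, 15, 16.5)]  # expected length 55
ABC_crashed = [(16.5, 18, 19.5), (16.5, 18, 19.5), (11.5, 13, 14.5)]  # crashed down to 49
DEF         = [(15.5, 17, 18.5), (14.5, 16, 17.5), (14.5, 16, 17.5)]  # expected length 49

trials = 500
for label, abc in (("normal", ABC_normal), ("crashed to 49", ABC_crashed)):
    project = np.maximum(sample_path(abc, trials), sample_path(DEF, trials))
    print(label, round(project.mean(), 2))
# Once the expected path lengths are equal, the mean project duration stays above 49:
# the variance of the parallel path erodes part of the time the crashing was meant to save.
```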
The complete experiment was conducted using the following variations in the data and/or the network:
- The simulation was conducted with the basic two path network. All activity distributions were symmetrical with a range of 3 and a variance of .25.
- The variance was increased to .694 and the range to 5. The data for this trial is presented in exhibit #2.
- A third path was added to the network. This path was given an initial length equal to the shorter of the two original paths. The basic data for this network is presented in exhibit #3.
The information generated from these experiments is summarized in exhibit #6. The simulation was ended after 500 iterations. The mean results were calculated and are plotted against the traditional CPM results in exhibits #7 and #8.
Analysis Of Results
Exhibits #7 and #8 indicate that when the simulated activity times are used the sub-critical path begins to affect the project duration earlier in the compression sequence than when the deterministic estimates are used. Specifically, in exhibit #7 the third compression produces an effective time saved of only .994 days while the traditional CPM would yield an entire day. This effect was caused by the variance in the activity times. The effective time saved decreases even further as the parallel path lengths converge at successive compressions. This decrease in the effective time saved caused the net cost slopes to increase, indicating that the variance could have a direct effect on the crashing strategy. The data in exhibit #6 shows that in all cases the net cost slope in the 6th compression was greater than the corresponding value in the 7th compression.
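In other words, the net cost slope divides the crash cost by the simulated effective time saved rather than by the nominal one-day reduction. A small illustration with a hypothetical crash cost (only the 0.994-day figure comes from the text):

```python
def net_cost_slope(crash_cost_increase, effective_days_saved):
    """Cost per day actually saved once parallel-path variance is accounted for."""
    return crash_cost_increase / effective_days_saved

print(net_cost_slope(10, 1.0))     # 10.0   -- nominal slope, a full day assumed saved
print(net_cost_slope(10, 0.994))   # ~10.06 -- simulated saving on the 3rd compression
print(net_cost_slope(10, 0.60))    # ~16.7  -- slope climbs sharply as the paths converge
```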
In the two path network with the variance increased to .694, the net time saved decreased even further. This caused the subcritical path to influence the expected project duration at an earlier compression and with greater magnitude. This in turn increased the net cost slope which would further modify the crashing strategy.
When the third path was added to the network and the variance held at .25 the additional path increased the parallelism effect. This increase was considerably pronounced in the 5th and 6th compressions, and provides further evidence that the variance will have a direct effect on the crashing strategy.
The experiment has clearly shown that an increase in variance or an increase in the number of alternate paths through the network will cause estimates of project durations to increase. Since the PERT-type estimates have been shown to be more accurate than the CPM deterministic estimates (13), this indicates that the CPM estimating techniques will tend to underestimate project durations. In addition, traditional methods will continue to crash activities along the critical path after the variance of a shorter path is affecting the completion time. This could cause the project manager to underestimate both his individual activity and his total project costs. There are two reasons. First, the cost of crashing the project is actually greater than expected because, if needed time is actually to be saved, the manager must parallel crash his independent paths earlier than traditional techniques would indicate. Second, if he follows the traditional crashing strategy, the expected time saved cannot be realized and hence the overhead costs are not reduced as expected. In summary, the net cost slope calculations have shown that in all cases parallel crashing should be considered at an earlier time in the crashing sequence than the traditional approach would suggest.
Summary And Conclusions
In this paper, simulation techniques were used to determine project duration at successive crash levels while considering activity time variance for each activity, and the project duration at each level was compared to the traditional CPM results. The experiment clearly indicates that traditional CPM methods used by project management to estimate project duration and costs may be overly optimistic. The optimism arises from the traditional failure to consider the effects of variability in path completion times on the crashing strategy.
In situations where accuracy is important this optimism can be reduced by recognizing the following situations and adjusting the estimating technique accordingly:
- The expected activity times derived from a three estimate, PERT-type calculation provides a more accurate estimate and also allows the activity time variance to be calculated and included in the estimates of project duration.
- As the variances of the activities increase, the estimate of project duration calculated by traditional methods becomes less accurate. Hence, the greater the activity variance the greater the optimism demonstrated by traditional CPM methods.
- As the number of parallel paths through the network increases the estimates of project duration calculated by traditional methods become less accurate. Hence, the greater the number of paths through the network, the greater the optimism demonstrated by traditional CPM methods.
The use of activity variances and simulation techniques to derive the project duration leads to more accurate calculations of effective time saved and net cost slopes, which in turn yield a crashing strategy incorporating parallel crashing at an earlier time in the compression sequence. In this way, much of the optimism can be removed from estimates of project completion and project costs.
In some special instances it may also be possible to decrease the optimism by employing a special technique:
- If several activities exist which have large variances relative to the rest of the project, it may be possible to decrease the effect of the variance in any associated path by crashing these activities. This would effectively decrease the influence of that particular path on the completion time.
- In a case where the cost slopes along the critical path are relatively high compared to the project overhead costs, it would not be economical to crash these activities. However, if cost slopes on alternate paths were low relative to overhead costs it might be possible to reduce any further effect of the alternate paths by crashing at the lower cost.
- When activity time distributions are skewed in either a positive or a negative direction, estimates and actual time occurrences should be expected to fall in the direction of the skewness. Awareness of this possibility could be a considerable benefit in the development of a crashing strategy.
It should also be noted, however, that this experiment was based on a network which presupposed independence of the alternate paths. If cases occur where there is some degree of dependence between the paths, caused perhaps by common or interconnecting activities, the effect of the variance in individual paths should be expected to decrease slightly from the results shown by this experiment.
As he derives his estimates, each project manager must face the possibility of this optimism affecting his crashing strategy. Although it may not always be possible to anticipate problem areas in a large and complicated project, it may be possible to reduce their impact by being aware of the effect that activity time variance can have on project estimates and crashing strategies.
To this point the research has defined the optimism which exists in project estimates, and the effect it can have on the development of a crashing strategy has been clarified. This continuing research effort is rapidly developing “rules of thumb” which may in the future prove useful in providing more accurate and cost effective crashing strategies for realistic projects.
BIBLIOGRAPHY
1. Donaldson, W. A., “Estimation of the Mean and Variance of a Pert Activity Time”, Operations Research, 13:382-385, May 1965.
2. Elmaghraby, Salah E., “On the Expected Duration of Pert Type Networks,” Management Science, 13:299-306, January 1967.
3. Fulkerson, D.R., “Expected Critical Path Lengths in Pert Networks”, Operations Research, 10:808-817, November 1962.
4. Hartley, H.O., and A.M. Worthaw, “A Statistical Theory for Pert Critical Path Analysis”, Management Science, 12:B-469 to B-481, January 1966.
5. Kelly, James E„ Jr., “Critical-Path Planning and Scheduling: Mathematical Basis”, Operations Research, 9, 296-320, 1961.
6. Klingel, A.R., Jr., “Bias in Pert Project Completion Time Calculations for a Real Network”, Management Science, 13:B-194 to B-201, December 1966.
7. Levy, F.K., G.L. Thompson, and J.D. Wiest, “An Introduction to the Critical Path Method”, Industrial Scheduling, J. F. Muth and G. L. Thompson (eds.) Prentice-Hall, Englewood Cliffs, N.J., 1963.
8. MacCrimmon, K.R., and C.A. Ryavec, “An Analytical Study of the Pert Assumptions”, Operations Research, 12:16-36, January-February 1964.
9. Malcolm, D.G., J.H. Rosenbloom, C.E. Clark, and W. Fazer, “Application of a Technique for Research and Development Program Evaluation”, Operations Research, 7, No. 5, 1959.
10. Swanson, Lloyd A., and H.L. Pazer, Pertsim Text and Simulation, International Textbook Company, Scranton, Pennsylvania, 1969.
11. Swanson, Lloyd A., Linear Programming: “An Approach to Critical Path Management”, Project Management Quarterly, Volume 4, No. 1, March 1973.
12. Swanson, Lloyd A., and H. L. Pazer, “Implications of the Underlying Assumptions of Pert”, Decision Sciences, Volume 11, October 1971, pp. 461-480.
13. Swift, F.W., “An Investigation of the Relative Accuracy of Three-time Versus Single-Time Estimating of Activity Duration in Prospect Management”, Unpublished Ph.D. Dissertation, Oklahoma University, 1971.
14. Van Slyke, R.M., “Monte Carlo Methods and the Pert Problems” Operations Research, 11:839-860, September 1963.
The trials and tribulations of a Canadian business titan during a fascinating period in 19th-century Quebec.
A Mind at Sea is an intimate window into a vanished time when Canada was among the world’s great maritime countries. Between 1856 and 1877, Henry Fry was the Lloyd’s agent for the St. Lawrence River, east of Montreal. The harbour coves below his home in Quebec were crammed with immense rafts of cut wood, the river’s shoreline sprawled with yards where giant square-rigged ships – many owned by Fry – were built.
As the president of Canada’s Dominion Board of Trade, Fry was at the epicentre of wealth and influence. His home city of Quebec served as the capital of the province of Canada, while its port was often the scene of raw criminality. He fought vigorously against the kidnapping of sailors and the dangerous practice of deck loading. He also battled against and overcame his personal demon – mental depression – going on to write many ship histories and essays on U.S.-Canada relations.
Fry was a colourful figure and a reformer who interacted with the famous figures of the day, including Lord and Lady Dufferin, Sir John A. Macdonald, Wilfrid Laurier, and Sir Narcisse-Fortunat Belleau, Quebec’s lieutenant-governor.
" The first book in English to tell the little-known story of Quebec City's shipbuilding era since.. 1995." | http://dundurn.com/books/Mind-Sea |
A book of remembrance has been launched for all those who have died as a result of the COVID-19 pandemic.
Remember Me is an online memorial, launched today by St Paul’s Cathedral, for those in the UK – of all faiths, beliefs or none – who have died as a result of the COVID-19 pandemic.
Family, friends and carers of those who have died can submit, free of charge, the name, photograph and a short message in honour of a deceased person via the Remember Me website.
Cllr Shelley Powell, Knowsley Council’s Cabinet Member for Communities and Neighbourhoods said: “This virtual book of remembrance is a way of helping us remember those we have lost to this terrible disease and hopefully finding comfort in our memories.”
Members of the public can register details of their family member or friend on Remember Me: A book of remembrance for the UK via: www.rememberme2020.uk and in time it is also intended that the online memorial will become a physical memorial at the Cathedral.
The deceased person must have been living in the UK. Remember Me will be open for entries for as long as needed. | https://www.knowsleynews.co.uk/remembering-loved-ones-lost-during-pandemic/ |
If you missed the Empty Bowls fundraising event for the Share Outreach Food Pantry, you missed a chance for some really great food and, of course, the opportunity to help an organization that is there to help people in need.
The symbol of the Empty Bowls event is a large number of empty bowls, some made by students of Janice Shaughnessy’s Phoebe’s Barn pottery studio in Mont Vernon and some by students from Hollis Brookline and Wilton-Lyndeborough high schools. The former were for sale, the latter were there for the taking.
The event began at noon on March 31 and ran until 2, and by 12:30, the Share building in Milford was packed with people eating food from Black Forest, Pasta Loft, Union Street Grill, Papa Joe’s Humble Kitchen, LongHorn Steakhouse, Moulton’s Market, the Mont Vernon General Store, and wickedpissahchowder.
And while Empty Bowls was certainly about food, it was more about helping Share to be ready to ensure that folks in the area have a place to turn when they’re having a hard time. The organization provides food, clothing, and even emergency financial assistance to people in need in the Souhegan Valley and annually distributes about 135,000 meals to more than 500 families.
As you can imagine, that takes a massive amount of money and donations of goods. Events like Empty Bowls go a long way toward making sure that Share is ready.
You should be ready, too, for next year’s Empty Bowls. It’s not expensive, it’s fun, it’s a real community event, and it’s vital. | https://www.nashuatelegraph.com/opinion/editorials/2019/04/11/event-vital-to-community/ |
Analyzing road scenes using cameras could have a crucial impact in many domains, such as autonomous driving, advanced driver assistance systems (ADAS), personal navigation, mapping of large scale environments and road maintenance. For instance, vehicle infrastructure, signage, and rules of the road have been designed to be interpreted fully by visual inspection. As the field of computer vision becomes increasingly mature, practical solutions to many of these tasks are now within reach. Nonetheless, there still seems to exist a wide gap between what is needed by the automotive industry and what is currently possible using computer vision techniques.
The goal of this workshop is to allow researchers in the fields of road scene understanding and autonomous driving to present their progress and discuss novel ideas that will shape the future of this area. In particular, we would like this workshop to bridge the gap between the community that develops novel theoretical approaches for road scene understanding and the community that builds working real-life systems performing in real-world conditions. To this end, we plan to have a broad panel of invited speakers coming from both academia and industry.
We encourage submissions of original and unpublished work in the area of vision-based road scene understanding. The topics of interest include (but are not limited to):
Road scene understanding in mature and emerging markets
Deep learning for road scene understanding
Prediction and modeling of road scenes and scenarios
Semantic labeling, object detection and recognition in road scenes
Dynamic 3D reconstruction, SLAM and ego-motion estimation
Visual feature extraction, classification and tracking
Design and development of robust and real-time architectures
Use of emerging sensors (e.g., multispectral imagery, RGB-D, LIDAR and LADAR)
Fusion of RGB imagery with other sensing modalities
Interdisciplinary contributions across computer vision, robotics and other related fields.
We encourage researchers to submit not only theoretical contributions, but also work more focused on applications. Each paper will receive 3 double-blind reviews, which will be moderated by the workshop organizers. The submission site is: https://cmt3.research.microsoft.com/CVRSUAD2018. More information regarding the submission process can be found at https://cvrsuad.data61.csiro.au. | http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=90747&copyownerid=35386
Guys. It is supposed to be almost 100 degrees in Seattle this week. That’s right… ONE HUNDRED! I think it goes without saying that I am NOT turning my stove or oven on once this week! I woke up early this morning to make us some breakfast muffins so that we have something quick to grab on our way out the door; something that does not need to be reheated.
These are very quick and easy! About 5-10 minutes of mixing and 20 minutes of baking. Just a good old-fashioned wet bowl mixed into dry bowl type of recipe. If you are an early morning riser, these are absolutely something that can be whipped together while you are getting ready for the day!
My thoughts on this recipe – I enjoy substituting sour cream or plain yogurt into recipes where people would usually use milk or cream. I find the batter is a little thicker and I like the size and consistency a little better. If you wanted to sprinkle a little granulated sugar on these as you put them into the oven, I think that would be a nice touch but is not necessary! You can of course leave out the walnuts entirely, or sub in 1/2 a cup of something else such as shaved coconut or dried fruit like craisins.
Ingredients:
- 2C All Purpose Flour
- 1T Baking Powder
- 1/2t Baking Soda
- 1/2t Salt
- 1/2t Cinnamon
- 1/4t Nutmeg
- 2 Large Eggs
- 1C Sour Cream
- 2/3C Packed Light Brown Sugar
- 1t Vanilla Extract
- 4T Butter, Melted
- 1/2C Mashed Ripe Bananas (I just used 1 banana without measuring)
- 1/2C Chopped Walnuts
Preheat your oven to 400 degrees Fahrenheit. Butter a muffin tin, or use muffin cups (ideal) to prevent batter from sticking to the pan.
Measure the flour, baking powder, baking soda, salt, cinnamon and nutmeg into a large bowl. Stir briefly with a whisk, only to combine.
In a medium bowl, mix your eggs, sour cream, vanilla, and butter. Stir in your banana and walnuts. Pour this into the dry ingredient bowl, again stirring only to combine.
Pour your batter evenly across the 12 muffin cups. Bake for approx. 17-20 minutes, until a toothpick or knife can be inserted and come out clean. Let cool for 3-5 minutes before removing from muffin tin.
I like to serve hot, sliced in half with a little butter or cream cheese in the middle 🙂
Enjoy!!
X.O. Abbey Co.
Suggested kitchen gear for this recipe:
Silicone Muffin Pan / Tin Cupcake Mold by Daisy’s Dream – 12 Cup Silicone Pan / Baking Tray – Easy To Use – Simple To Clean, Red Bakeware
Premium Silicone Pot Holder,Trivets,Hot Mitts,Spoon Rest And Garlic Peeler Non Slip,Heat Resistant Hot Pads,Multipurpose Kitchen Tool. 7×7″ Potholders(Set of 6) Non Slip,Dishwasher Safe,Durable. | https://abbeycoseattle.blog/2017/08/01/sour-cream-banana-breakfast-muffins/?replytocom=2107 |
What I Found On Matt Damon (2020-05-23)
What’s up everyone. I have found some interesting information on Matt Damon, current as of 2020-05-23. I personally really like Matt Damon, so was eager to do some deep research into them. Let’s get started!
First… how popular is Matt Damon right now? On Google Trends Matt Damon had a popularity ranking of 25 ten days ago, 21 nine days ago, 83 eight days ago, 100 seven days ago, 55 six days ago, 51 five days ago, 49 four days ago, 27 three days ago, 27 two days ago, 26 one day ago and now has a popularity rank of 29. So in the recent past, they were gathering the most attention on 2020-05-14 when they had a rank of 100. If we compare Matt Damon’s popularity to three months ago, they had an average popularity of 26.9, whereas now their average popularity over the last ten days is 46.6. So by that measure, Matt Damon is getting more popular! It’s worth noting, finally, that Matt Damon never had a rank of 0, indicating people are always searching for them 🙂
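For anyone curious how such an average is computed, here is a minimal sketch using the trend ranks quoted above; exactly which ten values the post averaged (and any rounding in the underlying data) is an assumption, so the printed figure may differ slightly from the 46.6 quoted.

```python
# Sketch: ten-day average of the Google Trends ranks listed above.
# Which ten of the quoted values make up the window is an assumption.
recent_ranks = [21, 83, 100, 55, 51, 49, 27, 27, 26, 29]  # nine days ago ... today

average_rank = sum(recent_ranks) / len(recent_ranks)
print(f"Average popularity over the last ten days: {average_rank:.1f}")
```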
And what about how Matt Damon has fared if we consider the entire past 3 months? Our data indicates 2020-05-14 to be their most popular day, when they had a relative rank of 100. Not bad!
I did some more tiring analysis today on the web sentiment regarding Matt Damon, and found a number of news articles about them in the past month. I may update this post later when I analyze the best ones.
Considering everything written about Matt Damon (not just official news articles), I decided the overall sentiment for Matt Damon on 2020-05-23 is decent. But what do you think? I’d like to hear your comments (just don’t write anything too mean). | |
1. Field of the Invention
The present invention relates to a canceling structure of a combination switch, such as a turn signal switch for activating, for example, a turn signal lamp of a vehicle.
2. Description of Related Art
As such a canceling structure, there is a canceling structure of a turn signal switch as previously proposed by the inventor in JP-A-2001-10406. A turn signal switch incorporating this canceling structure is shown in FIGS. 4 and 5.
In this case, a swing block 2 having a control lever 1 is rotatably supported by a casing 3 and a cover 6 around a shaft 2a, and a cancel cam 9 is disposed between the swing block 2 and the cover 6. Referring to FIG. 5, the control lever 1 can be rotated vertically in the direction shown by arrows C and D around a lever shaft Q relative to the swing block 2; however, referring to FIG. 4, the control lever 1 is rotated horizontally in the direction shown by arrows A and B along with the swing block 2 around the shaft 2a to move in the direction corresponding to the rotation of a steering shaft 14.
The swing block 2 includes an arm 2k for pushing a moderate body 8 at the end thereof biased by a moderate spring 7 against a moderate groove 3b formed in the casing 3.
An upper shaft 9b of the cancel cam 9 can be moved along a long hole 6b formed in the cover 6 in a normal mode. As particularly shown in an enlarged view of FIG. 6, the cancel cam 9 is biased longitudinally toward a steering shaft 14 by a coil spring 11 via an elastic contact material 12. In the drawing, reference numeral 6d denotes a guide formed at the lower surface of the cover 6 and guides the elastic contact material 12 on the line connecting the steering shaft 14 with the shaft 2a of the swing block 2.
Returning to FIGS. 4 and 5, the swing block 2 is provided with a support groove 2c for receiving a lower shaft 9a coaxial with the upper shaft 9b of the cancel cam 9. The support groove 2c includes an angular projection 2d at the center thereof in the direction from the steering shaft 14 toward the shaft 2a of the swing block 2. When the control lever 1 is in a neutral position, the lower shaft 9a of the cancel cam 9 is positioned at the top of the angular projection 2d and a butting portion 9c is positioned out of a rotating path of the cancel pin 10.
As shown in FIG. 6, the cancel cam 9 includes a pressure contact surface 9e in contact with the upper shaft 9b and coming into contact with a pushing face 12a of the elastic contact material 12, the butting portion 9c extending toward the steering shaft 14, and a pressure portion 9d extending toward the shaft 2a for pushing against a cam guide 13, which will be described later. FIG. 7 is an enlarged perspective view showing the cancel cam 9, the elastic contact material 12, and the coil spring 11.
The cam guide 13 is formed in substantially U shape in FIG. 4, which is slidably placed on the swing block 2 only along the shaft 2a of the swing block 2, is pushed by a pressure spring 15 to be biased toward the cancel cam 9, movement of projections 13a formed on right and left sides thereof toward the steering shaft 14 is limited by a support 2h, and the inner wall in the U-shape is separated from the pressure portion 9d of the cancel cam.
When the control lever 1 is operated toward a leftward indicating position shown by arrow A to rotate the swing block 2 integrated with the control lever 1 around the shaft 2a, the angular projection 2d of the support groove 2c is moved and the cancel cam 9 is pushed by the elastic contact material 12 biased by the coil spring 11, and the lower shaft 9a slides down the slope of the angular projection 2d to move to the base of the angular projection 2d and the upper shaft 9b is moved toward the steering shaft 14 along the long hole 6b in the cover 6. Consequently, the butting portion 9c of the cancel cam 9 projects into the rotating path of the cancel pin 10 which rotates along with the steering shaft 14.
The cam guide 13 is rotated around the shaft 2a along with the swing block 2. As a result, the sidewall of the cam guide 13, on the upper side in FIG. 4, is positioned in contact with the side end of the pressure portion 9d of the cancel cam 9. The swing block 2 slides a moving part having a moving contact for a turn signal lamp, and thus, the moving contact is brought into contact with a fixed contact to blink a left turn signal lamp.
In this state, when a steering handle is operated in the same direction as the control lever 1 to rotate the steering shaft 14 leftward in the direction shown by arrow J, the cancel pin 10 pushes the butting portion 9c in the direction shown by arrow E to rotate the cancel cam 9 around the lower shaft 9a and the upper shaft 9b.
During the rotation, the pressure portion 9d of the cancel cam is only separated from the sidewall of the cam guide 13 that was close thereto, not obstructing the rotation.
By the rotation, the pressure contact surface 9e of the cancel cam 9 is rotated to compress the coil spring 11 via the elastic contact material 12; however, since a contact point between the pressure contact surface 9e and the elastic contact material 12 is separated from the lower shaft 9a and the upper shaft 9b, the cancel cam 9 is subjected to a rotating force by the coil spring 11, thereby returning to an initial state after the cancel pin 10 has been separated from the butting portion 9c.
During this period of time, since the swing block 2 pushes the moderate body 8 at the end of the arm against the cam groove 3b, the control lever 1 is held in the leftward indicating position rotated in the direction shown by arrow A.
Next, in this leftward indicating position, when the steering handle is operated to rotate the steering shaft 14 rightward in the direction shown by arrow K, the cancel pin 10 pushes the butting portion 9c in the direction shown by arrow F to rotate the cancel cam 9 around the lower shaft 9a and the upper shaft 9b.
By this rotation, the pressure portion 9d of the cancel cam pushes at the sidewall of the cam guide 13 which was close thereto. Accordingly, the swing block 2 having the cam guide 13 is rotated in the direction shown by arrow B to return to the neutral position, and the control lever 1 also returns to OFF position automatically, thereby turning off the left turn signal lamp.
With the return of the swing block 2, the angular projection 2d of the support groove 2c biases the lower shaft 9a to move the cancel cam 9 in the direction apart from the steering shaft 14, thus separating the butting portion 9c from the rotating path of the cancel pin 10.
The same goes for a case in which the control lever 1 is operated in the direction shown by arrow B, except that the direction of operation is reversed.
When the control lever 1 is rotated leftward in the direction shown by arrow A, and with the position held by hand, the steering handle is rotated rightward in the direction of returning the control lever 1 automatically, the cancel cam 9 which is forced to rotate by the pressure of the cancel pin 10 rotating in the direction shown by arrow K rotates in the direction shown by arrow F to push the pressure portion 9d against the sidewall of the cam guide 13. In this case, since the sidewall of the cam guide 13 is inclined, the cam guide 13 is moved toward the shaft 2a against the pressure spring 15, the rotation of the cancel pin 10 and the cancel cam 9 is allowed, thus causing no damage by the application of an excessive force.
When the cancel pin 10 is further rotated and passes through the cancel cam 9, the cam guide 13 returns to a position before the cancel cam 9 abuts thereon by the elasticity of the pressure spring 15, and the cancel pin 10 returns there by the elasticity of the coil spring 11.
The same goes for the case in which the control lever 1 is operated inversely and held therein.
The other structures including the connecting structure of the control lever 1 to the swing block 2 and the operation thereof are specifically described in JP-A-2001-10406.
With such a structure, when the control lever 1 is operated to rotate the steering handle in a desired rotating direction, and then the steering handle is returned, the control lever 1 returns automatically to the neutral position.
The coil spring 11 for returning the cancel cam 9 onto a line connecting the steering shaft 14 and the shaft 2a of the swing block is arranged to be aligned with the line, thus having the advantage of reducing the occupied area of the returning structure and also the width of the casing.
In the canceling structure of the turn signal switch described above, the pressure contact surface 9e of the cancel cam 9 is formed as a plane in contact with the upper shaft 9b, and similarly, the pressure surface 12a of the elastic contact material 12 in contact with the pressure contact surface 9e is shaped in plane. However, after the butting portion 9c of the cancel cam 9 has been pushed by the cancel pin 10 rotating in the same direction as the control lever to rotate around the upper shaft 9b (and the lower shaft 9a), sometimes the cancel cam 9 cannot return smoothly.
After consideration of the above problem, the following may be the cause.
When the cancel cam 9 is rotated, the coil spring 11 is compressed to displace the elastic contact material 12 and the corner at the side end of the pressure contact surface 9e of the cancel cam 9 is brought into contact with the pressure surface 12a of the elastic contact material 12, as shown in FIG. 8.
Assuming that the rotating angle of the cancel cam 9 is α, the angle formed by the pressure contact surface 9e and the line connecting the rotating shaft (upper shaft 9b) of the cancel cam 9 and the pressure surface 12a of the elastic contact material 12 is β, the distance between the rotating shaft of the cancel cam 9 and the contact point is s, the pushing force from the elastic contact material 12 by the coil spring 11 is P, and the component force perpendicular to the line connecting the rotating shaft of the cancel cam 9 having a pushing force P and the contact point is W, component force W and moment T for returning the cancel cam 9 are expressed as follows:
W = P × cos(α + β)
T = W × s
Accordingly, the cause is considered to be that, since the angle (α + β) from the rotating shaft to the contact point is relatively large, the value of W decreases and a sufficient moment is not obtained.
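To make the geometry concrete, here is a small numerical sketch of the two expressions above; the force, lever arm and angles are made-up illustrative values, not figures from the patent.

```python
import math

# Illustrative only: how the returning moment T = W * s shrinks as the angle
# (alpha + beta) grows. P, s and the angles below are hypothetical values.
P = 10.0   # pushing force from the coil spring via the elastic contact material [N]
s = 0.005  # distance from the rotating shaft to the contact point [m]

for angle_deg in (20, 40, 60, 80):
    angle = math.radians(angle_deg)     # alpha + beta
    W = P * math.cos(angle)             # component force perpendicular to the lever arm
    T = W * s                           # returning moment
    print(f"alpha+beta = {angle_deg:2d} deg -> W = {W:5.2f} N, T = {T * 1000:.1f} N*mm")
```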
If the pressure force P is increased to compensate, the size of the coil spring 11 must be increased, making it difficult to arrange it in the narrow space between the cover 6 and the swing block 2.
Similarly, since the elastic contact material 12 must be decreased in length in the limited space on the swing block 2, the inclination of the elastic contact material 12 tends to increase, thus making it impossible to slide on the guide smoothly.
Accordingly, in consideration of the above problems, it is an object of the present invention to provide a canceling structure of a combination switch in which a sufficient moment is ensured to return the cancel cam, and a smooth motion of the elastic contact is provided without increasing the size of the spring.
To this end, according to an aspect of the invention, there is provided a canceling structure of a combination switch constructed such that a swing block for supporting a control lever is rotatably held on a fixing side; when the control lever is rotated from a neutral position, a cancel cam moves into the path of a cancel pin with the rotation of the swing block; and the cancel cam biases the swing block with the movement of the cancel pin in the opposite direction from the operating direction of the control lever, thereby returning the control lever to the neutral position, wherein the cancel cam comprises a shaft to be guided by a long hole provided on the fixing side in parallel with the direction to move into the path of the cancel pin; pressure contact surfaces are provided at positions on the peripheral surface of the shaft, closer to a steering shaft than to the control lever; when the cancel pin moves in the same direction as the operating direction of the control lever, the end thereof is pushed by the cancel pin, and allowed to rotate around the shaft; an elastic contact material having pressure surfaces that can be brought into contact with the pressure contact surfaces of the cancel cam is biased along a guide in the moving direction of the cancel cam; when the cancel cam rotates, the side end of the pressure contact surface is pushed by the pressure surface of the elastic contact material; and when released from the cancel pin, the cancel cam returns to a position before rotation by the pressure from the elastic contact material.
Since the pressure contact surfaces of the cancel cam pass through almost the center of the shaft, the component force in the direction of returning rotation by the pressure of the elastic contact material increases as compared with the case of being apart from the shaft.
According to another aspect of the invention, the shaft of the cancel cam expands from the pressure contact surfaces toward the elastic contact material and the pressure surfaces of the elastic contact material have a recessed portion for escaping the extension portion of the shaft.
Even when the shaft expands, the pressure surfaces of the elastic contact material can be brought into contact with the pressure contact surfaces of the cancel cam, thus increasing the length of the elastic contact material by the length that the pressure surfaces are extended to the pressure contact surfaces.
| |
Q:
Question in MATLAB about comparing values to pi
I want to compute pi in MATLAB and then compare my value with the pi constant that is built into MATLAB.
So when I write
while(p~=pi)
the loop seems endless because it keeps testing for all the digits that the MATLAB pi has.
So when I wrote:
p=3.1416;
if p==pi
disp('yes');
else
disp('no');
end
the answer naturally was no. So I want to find a way to keep only five digits after the decimal point and test with that, i.e. test against pi = 3.14159.
Can anyone help?
A:
Floating-point values rarely compare exactly equal, so test against pi with a tolerance instead:
if abs(p-pi) <= 1e-5
See this Stack Overflow answer for details.
| |
Prop. Type: Residential Detached
MLS® Num: R2379352
Status: Active
Bedrooms: 4
Bathrooms: 2
Year Built: 1958
LOCATION! LOCATION! Great quiet street in a central location. Same owners since 1989, pride of ownership is everywhere throughout the home! Open foyer entrance with gleaming hardwood floors. Living room w/ stone fireplace adjoining the dining room with french drs to your large view deck. Family sized kitchen w/ S/S appliances & separate eating area. 3 good sized bedrooms up including Master bedroom with 2 closets. Downstairs is a bright daylight walk out basement. Rec room with wet bar and summer kitchen. One large bedroom in basement with a family room that could be converted to second bedroom. Plenty of storage and large laundry room also in bsmt. Lane access with double car garage. In immaculate condition, this home will sell itself. Newer roof & windows. OPEN HOUSE SUN JULY 21 12-2PM
General Info:
Property Type: Residential Detached
Dwelling Type: House/Single Family
Home Style: Rancher/Bungalow w/Bsmt.
Year built: 1958 (Age: 61)
Total area: 2,624 sq. ft. (243.78 m²)
Total Floor Area: 2,624 sq. ft. (243.78 m²)
Price Per SqFt: $628.77
Total unfinished area: 0 sq. ft.
Main Floor Area: 1,320 sq. ft. (122.63 m²)
Floor Area Above Main: 0 sq. ft.
Floor Area Below Main: 1,304 sq. ft. (121.15 m²)
Basement Area: 0 sq. ft.
Finished Levels: 2.0
Bedrooms: 4 (Above Grd: 4)
Bathrooms: 2.0 (Full: 2 / Half: 0)
Kitchens: 1
Rooms: 14
Taxes: $6,186.29 / 2018
Lot Area: 7,320 sq. ft. (680.05 m²)
Lot Frontage: 60' (18.288 m)
Lot Depth: 122
Rear Yard Exposure: Northeast
Outdoor Area: Balcny(s) Patio(s) Dck(s), Fenced Yard
Water Supply: Community
Plan: NWP17967
Additional Info:
Heating: Forced Air
Construction: Frame - Wood
Foundation: Concrete Perimeter
Basement: Full, Fully Finished, Separate Entry
Roof: Asphalt
Floor Finish: Hardwood, Tile, Wall/Wall/Mixed
Fireplaces: 2
Fireplace Details: Wood
Parking: Garage; Double
Parking Total/Covered: 4 / 2
Parking Access: Lane, Rear
Exterior Finish: Stucco, Wood
Title to Land: Freehold NonStrata
Suite: None
Room Information:
|Floor|Type|Dimensions|
|---|---|---|
|Main|Foyer|6'8" x 4'11" (2.03 m x 1.50 m)|
|Main|Living Room|16'5" x 13'2" (5.00 m x 4.01 m)|
|Main|Dining Room|10'7" x 10'2" (3.23 m x 3.10 m)|
|Main|Kitchen|9'7" x 9'1" (2.92 m x 2.77 m)|
|Main|Eating Area|12'2" x 7'6" (3.71 m x 2.29 m)|
|Main|Master Bedroom|13'2" x 11'5" (4.01 m x 3.48 m)|
|Main|Bedroom|11'4" x 10'7" (3.45 m x 3.23 m)|
|Main|Bedroom|10'7" x 8'11" (3.23 m x 2.72 m)|
|Below|Recreation Room|16'11" x 13'7" (5.16 m x 4.14 m)|
|Below|Family Room|17'7" x 12'9" (5.36 m x 3.89 m)|
|Below|Bar Room|12'10" x 8'2" (3.91 m x 2.49 m)|
|Below|Storage|14' x 10'1" (4.27 m x 3.07 m)|
|Below|Bedroom|14' x 12'3" (4.27 m x 3.73 m)|
|Below|Laundry|13'5" x 11'9" (4.09 m x 3.58 m)|
Bathrooms:
|Floor|Ensuite|Pieces|
|---|---|---|
|Main|No|4|
|Below|No|3|
Site Influences: Central Location, Private Yard, Recreation Nearby, Shopping Nearby
View: City
Legal Description: LOT 9, PLAN NWP17967, DISTRICT LOT 38, GROUP 1, NEW WESTMINSTER LAND DISTRICT
Other Details:
Dist to Public Trans: 1 BLK
Dist to School Bus: 2 BLK
Property Disclosure: Yes
Fixtures Leased: No
Fixtures Removed: No
Services Connected: Electricity, Natural Gas, Sanitary Sewer, Storm Sewer, Water
Data was last updated July 20, 2019 at 08:10 PM (UTC)
TEAM WAYNE DICK
REMAX ALL POINTS REALTY
Tel: 1 (604) 512 1234
[email protected]
The data relating to real estate on this website comes in part from the MLS® Reciprocity program of either the Real Estate Board of Greater Vancouver (REBGV), the Fraser Valley Real Estate Board (FVREB) or the Chilliwack and District Real Estate Board (CADREB). Real estate listings held by participating real estate firms are marked with the MLS® logo and detailed information about the listing includes the name of the listing agent. This representation is based in whole or part on data generated by either the REBGV, the FVREB or the CADREB which assumes no responsibility for its accuracy. The materials contained on this page may not be reproduced without the express written consent of either the REBGV, the FVREB or the CADREB.
Feed conversion efficiency (FCE) is the ratio of milk output to feed input, usually expressed as milk volume or solids yield per unit of dry matter intake (DMI). Feed conversion efficiency is usually considered only for the milking cows in a herd but there is a compelling case for considering FCE at the whole-farm level: efficiency gains in milking cows might in fact be offset by inefficiencies in other areas that influence overall FCE and profit. Whole-farm FCE (WFFE) is defined as total milk output divided by total feed produced or purchased for all animals on the dairy farm.
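As a minimal sketch of the definition above, WFFE is simply total milk output divided by total feed dry matter for every animal on the farm; the function name and the annual totals below are illustrative assumptions, not AHDB figures.

```python
# Sketch of whole-farm feed conversion efficiency (WFFE) as defined above.
# The figures are made-up annual totals for a hypothetical herd.
def whole_farm_fce(total_milk_kg: float, total_feed_dm_kg: float) -> float:
    """kg of milk per kg of feed dry matter, counting all animals on the farm."""
    return total_milk_kg / total_feed_dm_kg

milk_output_kg = 1_200_000   # all milk sold over the year
feed_dm_kg = 1_600_000       # dry matter fed to milkers, dry cows and youngstock
print(f"WFFE = {whole_farm_fce(milk_output_kg, feed_dm_kg):.2f} kg milk per kg DM")
```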
To help producers make decisions regarding their feeding strategies and the effects of modifications, there is a need for key performance indicators (KPIs) and drivers of WFFE appropriate to different production systems.
The main aim of the project is to quantify components of feed efficiency at the whole-herd level under a range of production and feeding systems and translate these into practical tools for use on farms.
Expected outcomes
- KPIs and drivers of whole-herd feed efficiency appropriate for UK conditions
- Farmer-friendly guidelines and a tool on how to calculate and improve whole farm feed efficiency
Start Date
June 2016
Completion Date
TBC
Lead Investigator
University of Nottingham
Funders
AHDB Dairy
For further information please contact: [email protected] or call 024 7647 8632
*This project is part of the Research Partnership between AHDB Dairy and the University of Nottingham (Lead Contractor). Other subcontracted investigators and delivery partners within the Research Partnership are: Harper Adams University, SRUC, RVC. | http://www.dairy.ahdb.org.uk/rd/nutrition/current-projects/whole-farm-feed-efficiency-(wffe)/ |
Hassan city in Karnataka, South India, offers an authentic experience that immerses visitors in Karnataka's culture. Tradition, countryside, temples, and narrow roads lined with small shops rub shoulders with shiny modern buildings.
The city is ideal for an excursion to the intricately carved stone temples at Belur and Halebid, whose sculptures depict scenes from Hindu mythology and everyday life in the Hoysala Kingdom.
Also reachable from Hassan is the imposing statue of Gomateshvara at Shravanabelagola, a significant pilgrimage site for the Jain religion, as well as Tipu's Summer Palace at Srirangapattanam, which was Tipu Sultan's summer residence until 1799. | https://www.transindus.co.uk/highlights/hassan/
After leaving the apple orchard with $44 worth of apples, you are probably wondering, what happened to the apples?
Good question. I quickly went to work on getting rid of them all.
Luckily Jack had snack week the very next day, so we sent him in with 30 apples.
Next I put 9 in an apple pie. It was very pretty but I forgot to take a picture. I thought about it a few days later but it was already gone. It was also very tasty.
Next I made apple sauce in the slow cooker.
It required about 14 apples, if you count the diced apples together with the apples I blended and HAND STRAINED to get apple juice. (I don’t have a juicer.)
It was the most delicious apple sauce I had ever tasted. I used it on these zucchini cakes.
Sometimes I ate it with greek yogurt too.
Another 6 apples were used to make an apple tart.
I cheated, that’s store bought puff pastry. I just sliced apples then sprinkled with sugar, dotted with butter and baked. When it looked done I took them out and brushed them with apricot jam.
Two were sent to Ben’s school for his apple tasting experiment.
More were sent in school lunches with peanut butter dip.
Seriously at this point I feel like I am the apple queen. I mean, look how many clever and delicious ways I’ve managed to make my family and friends eat all these apples! I was so confident.
Until I looked in the apple orchard bags.
I still had HALF LEFT!
HALF! What was I going to do with another whole bag?
I needed more recipes that used at least 10 apples or more.
I googled my way into making caramel apples next.
I found a recipe with 5 stars that I was sure would be a winner. How could sugar, corn syrup, heavy cream with some salt possibly go wrong?
All you do is boil it all together then let it cool a bit, dip the apples and add toppings. That would be a ton of fun for the older kids!
I got it all set up on this rainy Saturday afternoon…
Doesn’t this look like fun? I couldn’t wait to have a blast with the kids.
Well, unfortunately it was not fun. It was messy, and sticky, and the caramel didn’t stick to the apples. Neither did the toppings. Next thing I knew the kids were upset that they didn’t look like the store bought ones, and I was upset that I couldn’t find a single funny thing about it. Are these the ugliest caramel apples you’ve ever seen or what?
The toppings are on top because I had to keep pulling them off the bottom and sticking them on top.
It reminded me of the last time I tried to work with a melted dipping type topping. Cake pops.
Remind me not to work with melting and dipping things again. I’m terrible at it. | http://www.calisoff.com/2013/10/05/the-apple-queen/
A teacher of mine once said that there are no original ideas anymore; it is all just new and different expressions of the same idea. So a smart phone is just a new expression of 2 tin cans and a piece of string. An ipod is a new expression of a pianola. A car a new and different expression of a horse and cart. She said that in the early 90’s; before I had a computer at home. Before having access to ‘ideas’ at my fingertips. We all know we live in the age of the information revolution. We all can find anything we need to know about anything with a search and a click. So are we just re-imagining the same things or is there room for something really new?
I am loving hearing so many stories about young people re-imagining their worlds; coming up with amazing inventions that solve a problem. Here’s a couple of my favs:
A 13 year old boy in Massai invents a solution to lion attacks of cattle:
http://www.ted.com/talks/richard_turere_a_peace_treaty_with_the_lions.html
Next is this one on possible sources of power…I’m not sure of the science behind this, but I love the concept:
http://www.youtube.com/watch?v=L7oMIR_MoH0
As the information age blooms and we are re-aligning our priorities towards new expressions that prioritise communities and connection over material things – well, hopefully – I wonder if we are allowing the possibility of re-creating a platform where we can grow completely new, original ideas. Have we reached an evolutionary stage that could possibly prove my teacher wrong? I for one would like to think that there is renewed potential for original ideas – things that we have never thought of; concepts that we have not even dreamt of. Unless we can imagine a brand new world with brand new ideas, are we doomed to the same same? Capitalism, so-called democracy and a money-led society are clearly not working. As Einstein said, “We can’t solve problems by using the same kind of thinking we used when we created them.”
Bring on the new ideas...please!
Baz
5/29/2013 10:35:48 pm
Here's an idea: What if everyone on the planet woke up honest in the morning?
| http://www.imaginalhouse.com/merryns-blog/new-ideas-needed-please-merryns-blog
Technical analysis is a method used to analyze and (attempt to) predict the future price changes of a financial asset such as stocks, commodities, or currency pairs; only a price history is required.
To do this, various tools are used, such as charts, statistics, and indicators, which are combined to try to determine the most likely direction.
Assumptions of technical analysis
- History repeats itself – Since this method of analysis is based on the study of past movements to predict future movements, it can be said that the cornerstone of technical analysis is the assumption that history repeats itself, that is, it is cyclical, especially when it comes to asset prices.
- Market discounts everything – Another of the main assumptions is the idea that the price of a particular asset reflects all relevant factors about that asset. In other words, any fundamental data, such as a company’s financial statements or economic data are already efficiently discounted in the price and as soon as new information is presented it is reflected in the price almost immediately.
- Price moves in trends – In addition to studying past movements, many investors analyze the current price. The latest assumption is that the price moves in trends, either upward or downward in price. It is believed that after a trend is established the future price movement will be in favor of the trend and not against it.
The Basics
To use the technical analysis method, traders arm themselves with a variety of tools to try to convert the indications of this tool set into the most likely direction of the price.
- Price – The price of an asset is the primary factor in technical analysis. The first thing an analyst does is analyze the price and its past movements with the perspective of trying to predict its future movements.
- Time horizons – Another relevant aspect is the time horizon, or timeframe, in which the chart is viewed. In the case of a candlestick chart, for example, in a 5-minute timeframe each candle represents 5 minutes of price movement. If viewed in a 4-hour, 1-day, or even 1-month timeframe, one candle will represent the respective time frame. Which timeframe to use is determined by the investor, who will choose the one that best suits their preference and personal investment style. A long-term investor would not benefit from viewing a 5-minute chart, but rather a daily or weekly timeframe, for example (see the sketch after this list).
- Charts – To use these tools you need something to apply them to. In the financial markets, these tools are demonstrated on charts, where the price path is plotted. The three most commonly used chart types are line, bar, and candlestick charts.
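As a rough illustration of how a timeframe turns raw prices into candlesticks, here is a small sketch; it is not from the article, and the sample prices and the five-sample grouping are arbitrary assumptions.

```python
# Sketch: grouping consecutive prices into open/high/low/close candles for a
# chosen timeframe. The sample prices and period below are made up.
def to_candles(prices, period):
    """Aggregate every `period` consecutive prices into one OHLC candle."""
    candles = []
    for i in range(0, len(prices) - len(prices) % period, period):
        window = prices[i:i + period]
        candles.append({"open": window[0], "high": max(window),
                        "low": min(window), "close": window[-1]})
    return candles

sample_prices = [100.0, 100.4, 99.8, 100.9, 101.2, 100.7, 101.5, 101.1, 100.6, 102.0]
for candle in to_candles(sample_prices, period=5):   # e.g. five 1-minute prices -> one 5-minute candle
    print(candle)
```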
Conclusion
Technical analysis is, in essence, the study of an asset's price over time, which is ultimately the study of supply and demand. It is a tool for making a possibly better decision; it is useful, but it cannot be treated as an absolute.
Visit the Disclaimer for more information. | https://theinvestment.pt/en/technical-analysis-principles/ |
Apply Advanced Ancient Wisdom to better yourself, your life, and the world!
- Who are you?
- Why are you here?
- What are your purpose and direction
- How can you be a better person?
- How can you acquire true love, true peace, and true happiness?
Do you find yourself unable to accomplish your goals? Do you need help overcoming obstacles? Clueless about how to better yourself and your life?
Asking these questions is the first step toward finding the answers, and this eye-opening book explains the principle used by the most advanced civilization to ever exist, the Ancient Egyptians. The Egyptians possessed extraordinary creative range, profound intellect, and an intellectual culture that had attained a startlingly advanced level of development. The Egyptian mind still has the most important influence on our modern civilization.
Whether you want to improve yourself, achieve your goals, realize your purpose, or use your full potential to create your best life, Restoring Maat is the blueprint.
This informative and enlightening guide teaches;
•How to establish peace and balance within yourself
•How to realize and bring forth your Divine Higher Self
•What the most important aspects of your life are and how to improve them
•How to overcome your obstacles and achieve your goals
•How to accomplish more and live your life to the fullest! | https://restoremaat.com/restoring-maat/ |
There are a lot of candidate questions that get asked, but there is always one candidate question that almost every person vying for a job gets asked near the end of a job interview, and it popped up recently on LinkedIn’s Premium Career Group from an MBA candidate from the Kogod School of Business at American University.
You’ve probably asked it (or been asked it) many times too, and it’s this:
“I don’t have any additional questions. You answered all of mine. What questions do you have for us?”
Let me state for the record that I have a mixed track record of answering candidate questions myself, and my guess is that every hiring manager who has conducted candidate interviews has heard a wide variety of responses to the question.
What questions should candidates be ready to ask you?
But, here’s what the MBA candidate asked about this question on LinkedIn:
"Many interviewees become overwhelmed with the interview itself that they forget an interview is also a conversation. It seems that some are stumped by this invitation to ask questions about a prospective employer.
It is clear that this invitation helps interviewers gain more insight into how prepared an applicant is, how much passion they exude, how interested they are, or even what type of things (that) hold value to them.
I’m curious though — What types of questions have you all come prepared to interviews with? How did you adapt when the prepared questions were “already answered”?
For those in HR — Help us understand what you think about an applicant that has nothing more to ask in an interview.
An effective approach to preparing for an interview is that you do research on the role, the company, the history, etc. Could it be possible that a candidate truly does not have any more questions? Is this acceptable?
Tell us more, please!”
This is a great question because of that last point — “Could it be possible that a candidate truly does not have any more questions? Is this acceptable?”
Most hiring managers I know would NOT consider it acceptable if a candidate didn’t have some questions ready to ask when they had a chance, and the more the candidate has prepped and prepared for the interview, the greater the likelihood that they will have a great many questions they would like answers to.
Is NOT having more questions a deal breaker?
But I keep coming back to this notion that “having questions = good candidate to keep talking to” and “not having questions = bad candidate we should eliminate immediately.”
So, here are some of the best answers from some of the nearly 1.1 million members of LinkedIn’s Premium Career Group when it comes to the issue of candidate questions. See if they align in any way in what you believe candidates should ask YOU at the end of their job interview:
- From a leadership consultant in Vancouver, British Columbia — “As a hiring manager, I have generally considered it a negative when a candidate cannot ask any single, reasonably relevant question when invited. An exception might be for an internal interview, where the role parameters and key stakeholders could be very well known to the candidate. As a recruiter, or if dealing with very junior candidates, I might also be less judgmental.”
- From a customer service program manager in Atlanta — “I love asking “Is there anything that you’ve learned about my work experience that you’d like more clarification on?” or “How do you feel my skills fit this position?” or “Do you have any hesitation about my skill set?” There have been times when the interview was incredibly conversational and I really didn’t have any additional questions, but I always like to close by asking next steps.”
- From an executive recruiter in the San Francisco Bay Area — “Recruiters do not want to hear a list of perfectly staged questions. They honestly want to answer any questions you may have. Shift your perspective from “What do recruiters when to hear?” and switch to being “Wildly Curious.” Ask because you are totally curious. Try to avoid shifting the focus. They really do want to know if you have any other questions.”
- From a game programmer and software developer in Seattle — “I had a similar issue but I got around it by actively asking questions during the interview where appropriate. While I didn’t have additional questions to ask at the very end, I got in a lot of questions through the interview. I think it is important to remember that interviews are templated. Even if you asked 100 questions during the interview they will still ask if you have any additional questions at the end.”
- From an HR/payroll systems manager in Nashville –– “Get specific about how the company operates. For example, ask how often does the company reorganize itself. Get specific about the job for which you just interviewed… but twist it somewhat. Ask how many people currently hold the same position, what is the average amount of time any given person stays in that job. Then ask the interviewer his/her opinion about why it is the case.”
Another view: Maybe this is about leaving a good impression
This is all good advice, and there is a lot more of it if you spend some time in the candidate questions thread on LinkedIn’s Premium Career Group. But all of this makes me wonder — what would TA professionals, hiring managers, and interviewers have as advice on this?
Here’s what Al Palumbo wrote about this on Resume-Live.com under the section Why your questions for the employer matter:
"No matter what you’ve said in an interview or how great your credentials are, when we sit in a room afterward and discuss which candidates to bring back, the ones who leave the best impression are the ones we remember most.
So don’t make the mistake of thinking “well, I gave great answers already” and therefore you ease up just as the interview draws to a close. This is the time when you can leave them with a feeling that you are someone who is exactly the bright, resourceful, energetic person they want to add to their company.
And so how you ask your questions of them – one of the last things they’ll remember about you – can be as important as the questions themselves.”
Is there a right or wrong answer here?
So, what are YOU looking for when you interview a candidate and end with, “What questions do you have for us? Is there anything you would like to ask?”
When it comes to candidate questions, is there a right answer, or a wrong answer here? Is it really the candidate’s big chance to make a powerful impression? Will they ruin their chances of you hiring them if they don’t have something meaningful to ask?
Or, is there a better way for a hiring manager to end a candidate interview? Is there something more tangible, more meaningful, or more insightful that should be asked and answered here?
I would love to get some recruiters, hiring managers and TA pros to weigh in on candidate questions, and particularly “do you have anything you would like to ask?” Leave your questions in the comments, or send them to me directly at [email protected]. If I get enough, I’ll write another post about them here.
Authors
John Hollon
John Hollon is managing editor at Fuel50, an AI Opportunity Marketplace solution that delivers internal talent mobility and workforce reskilling. You can download the research reports in their Global Talent Mobility Best Practice Research series at Fuel50.
| https://recruitingdaily.com/candidate-questions-one-thing-every-interviewer-usually-asks/
Please note that tax, investment, pension and ISA rules can change and the information and any views contained in this article may now be inaccurate.
Apple’s shares didn’t do much at all in response to boss Tim Cook’s launch of iPhone 13 and updated versions of its Watch and iPad, but this seems to be a normal trading pattern for product releases from the tech giant and as much the result of the old formula of ‘buy on the rumour and sell on the fact’ as anything else.
In the six months before previous next-generation product announcements Apple’s shares have risen by an average of nearly 23%, but in the three-month and six-month periods afterwards they have advanced by an average of just 3.3% and 9.3% respectively.
|Announced|Product|Launched|6 months before|3 months before|3 months after|6 months after|12 months after|
|---|---|---|---|---|---|---|---|
|09-Jan-07|iPhone 1|29-Jun-07|43.8%|30.2%|25.8%|63.7%|39.4%|
|09-Jun-08|iPhone 3G|11-Jul-08|(0.1%)|(5.9%)|(43.9%)|(47.5%)|(19.7%)|
|08-Jun-09|iPhone 3GS|19-Jun-09|55.0%|37.3%|32.7%|40.1%|96.5%|
|07-Jun-10|iPhone 4|24-Jun-10|28.7%|17.3%|8.7%|20.3%|21.3%|
|04-Oct-11|iPhone 4S|14-Oct-11|23.7%|18.1%|(0.5%)|43.4%|49.2%|
|12-Sep-12|iPhone 5|21-Sep-12|16.2%|69.9%|(25.8%)|(35.3%)|(33.2%)|
|17-Sep-13|iPhone 5S|20-Sep-13|3.4%|12.1%|17.5%|13.1%|51.2%|
|09-Sep-14|iPhone 6|19-Sep-14|29.1%|6.7%|14.1%|8.1%|15.8%|
|09-Sep-15|iPhone 6S|25-Sep-15|(7.0%)|(10.0%)|(5.8%)|(7.9%)|(0.1%)|
|07-Sep-16|iPhone 7|16-Sep-16|9.1%|18.5%|(0.3%)|(8.3%)|51.3%|
|12-Sep-17|iPhone 8|22-Sep-17|18.1%|7.1%|7.9%|12.7%|38.5%|
|12-Sep-17|iPhone 8-plus|22-Sep-17|18.1%|7.1%|7.9%|12.7%|38.5%|
|12-Sep-17|iPhone X|03-Nov-17|8.9%|2.4%|11.4%|4.2%|37.5%|
|13-Sep-18|iPhone XS, XS Max, XR|26-Oct-18|28.2%|19.0%|(22.6%)|(11.0%)|(3.4%)|
|10-Sep-19|iPhone 11|20-Sep-19|25.0%|14.8%|21.1%|31.7%|109.5%|
|13-Oct-20|iPhone 12|23-Oct-20|60.8%|30.0%|5.2%|8.1%||
|14-Sep-21|iPhone 13|24-Sep-21|22.4%|13.5%||||
|Average|||22.5%|16.9%|3.3%|9.3%|32.8%|
(Share price moves relative to the announcement date; figures in brackets are negative.)
Source: Company accounts, Refinitiv data
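For readers who want to check the arithmetic, the sketch below reproduces the "Average" row from the "6 months before" column; the values are taken straight from the table, with bracketed figures treated as negative.

```python
# Sketch: recomputing the average 6-month pre-announcement share price move
# from the table above (bracketed percentages treated as negative).
six_months_before = [43.8, -0.1, 55.0, 28.7, 23.7, 16.2, 3.4, 29.1, -7.0,
                     9.1, 18.1, 18.1, 8.9, 28.2, 25.0, 60.8, 22.4]

average = sum(six_months_before) / len(six_months_before)
print(f"Average 6-month pre-announcement move: {average:.1f}%")  # ~22.5%
```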
However, a lot does ride on the latest iteration of the iPhone and shake-up of Apple’s product range. The firm continues to encounter anti-trust pressure from regulators regarding its App store – and its monster $2.4 trillion market capitalisation means that any slip or loss of earnings momentum could leave shareholders with a problem.
A forward price/earnings ratio of nearly 27 times prices in a lot of future growth – and analysts have pencilled in just 5% sales growth and 2% earnings per share growth for the year to September 2022.
Hard to believe as it may be, this leaves Apple with something to prove, in terms of its ability to keep regulators sweet, persuade customers to upgrade to 5G mobile devices and persuade shareholders that the expected 70% surge in earnings per share in the year to September 2021 was not just a one-off owing to the pandemic, lockdowns and a surge in working from home – sales growth in Mac computers and iPads showed marked signs of a slowdown in the last fiscal quarter, to June.
[Chart omitted. Source: Company accounts. Financial year to September]
Apple has already suffered four induced profit slides in the past decade, and the first three were largely related to the iPhone’s product cycles and its functionality, price points and demand in China (the early stages of the pandemic contributed to the last one). The subsequent share price surge may mean that January 2020’s crunching profit warning is but a distant memory, but it does not mean it cannot happen again in the latest set of product features fail to capture consumers’ imaginations.
[Chart omitted. Source: Company accounts. Financial year to September]
As the market cap suggests bulls of the stock consider this an unlikely development, especially as 5G services and networks continue to roll out and it has not been wise to bet against Apple under the guidance of either Steve Jobs or Tim Cook. Even considering greater regulatory scrutiny there seems little reason to expect app sales to stumble, barring perhaps a nasty recession, and they also make Apple customers sticky and more likely to upgrade their devices in the future, creating a bit of a virtuous circle.
It may therefore take a bigger-picture development to get Apple’s share price momentum to sour. The shares lost 30% of their value in a month between February and March 2020 as the pandemic began to sweep around the world as investors looked to raise cash as best they could and taking profits in Apple was a quick and simple way to do it.
A surprise increase in interest rates could take its toll, given Apple’s growth stock status, as could a strong, inflationary economic recovery – there would be less reason to pay a premium earnings multiple for Apple’s growth profile if beaten-down cyclical, value stocks were showing the same if not faster growth on a lower rating.
These articles are for information purposes only and are not a personal recommendation or advice. | https://www.youinvest.co.uk/articles/investmentarticles/232827/muted-market-response-iphone-13-rule-not-exception |
In this example, it will be demonstrated how the Advection equation can be solved on a one-dimensional domain.
The governing equation is the unsteady advection equation ∂u/∂t + ∂f/∂x = 0, where f = au is the advection flux.
Since we are solving the unsteady advection problem, we must specify this in the solver information. We also choose to use a discontinuous flux-reconstruction projection and use a Runge-Kutta order 4 time-integration scheme.
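The session file itself is not reproduced here, but the following stand-alone sketch shows the same idea on a toy grid: a 1D advection equation advanced with the classic fourth-order Runge-Kutta scheme named above. It is a conceptual illustration, not the Nektar++ implementation, and all numbers are assumptions.

```python
import numpy as np

# Conceptual sketch (not a Nektar++ session file): u_t + a u_x = 0 on a periodic
# domain, first-order upwind in space, classic RK4 in time. All values are assumptions.
a, nx, dt, nsteps = 1.0, 200, 0.002, 500
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-100.0 * (x - 0.5) ** 2)        # initial Gaussian pulse

def rhs(u):
    return -a * (u - np.roll(u, 1)) / dx   # upwind approximation of -a * du/dx

for _ in range(nsteps):                    # classic fourth-order Runge-Kutta step
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("pulse peak is now near x =", x[np.argmax(u)])
```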
In this example, it will be demonstrated how the Helmholtz equation can be solved on a two-dimensional domain.
The equation to be solved is ∇²u − λu = f, where ∇² is the Laplacian and λ is a real positive constant.
The geometry for this problem is a two-dimensional octagonal plane containing both triangles and quadrilaterals. Note that a mesh composite may only contain one type of element. Therefore, we define two composites for the domain, while the rest are used for enforcing boundary conditions.
For both the triangular and quadrilateral elements, we use the modified Legendre basis with 7 modes (maximum polynomial order is 6).
Only one parameter is needed for this problem. In this example λ = 1 and the Continuous Galerkin Method is used as the projection scheme to solve the Helmholtz equation, so we need to specify the following parameters and solver information.
All three basic boundary condition types have been used in this example: Dirichlet, Neumann and Robin boundary. The boundary regions are defined, each of which corresponds to one of the edge composites defined earlier. Each boundary region is then assigned an appropriate boundary condition.
We know that for f = −(λ + 2π²)sin(πx)cos(πy), the exact solution of the two-dimensional Helmholtz equation is u = sin(πx)cos(πy). These functions are specified to initialise the problem and to verify that the correct solution is obtained by evaluating the L2 and Linf errors.
This execution should print out a summary of the input file, the L2 and Linf errors and the time spent on the calculation.
Figure 6.1: Solution of the 2D Helmholtz Problem.
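As a quick sanity check of that manufactured solution (independent of Nektar++), one can verify symbolically that u satisfies ∇²u − λu = f with λ = 1; the sketch below assumes SymPy is available.

```python
import sympy as sp

# Check that u = sin(pi x) cos(pi y) satisfies laplacian(u) - lambda*u = f
# for the forcing given above, with lambda = 1 as in this example.
x, y = sp.symbols("x y")
lam = 1
u = sp.sin(sp.pi * x) * sp.cos(sp.pi * y)
f = -(lam + 2 * sp.pi**2) * sp.sin(sp.pi * x) * sp.cos(sp.pi * y)

residual = sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2) - lam * u - f)
print(residual)   # prints 0, so u is indeed the exact solution
```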
The following example demonstrates the application of the ADRsolver for modelling advection-dominated mass transport in a straight pipe. Such a transport regime is encountered frequently when modelling mass transport in arteries. This is because the diffusion coefficient of small blood-borne molecules, for example oxygen or adenosine triphosphate, is very small, O(10⁻¹⁰).
For small diffusion coefficient, ϵ, the transport is dominated by advection and this leads to a very fine boundary layer adjacent to the surface which must be captured in order to get a realistic representation of the wall mass transfer processes. This creates problems not only from a meshing perspective, but also numerically where classical oscillations are observed in the solution due to under-resolution of the boundary layer.
In the following we will numerically solve mass transport in a pipe and compare the calculated mass transfer at the wall with the Graetz-Nusselt solution. The Peclet number of the transport regime under consideration is 750000, which is physiologically relevant.
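To give a feel for why this regime is advection dominated, the sketch below evaluates a Peclet number Pe = U·D/ε; the velocity and diameter are purely illustrative values chosen to reproduce the order of magnitude quoted above, not the values used in the study.

```python
# Illustrative only: Peclet number Pe = U * D / eps compares advection to diffusion.
# U and D are made-up values chosen to give a Pe of the order quoted above.
eps = 1e-10   # diffusivity of a small blood-borne molecule [m^2/s], as quoted
U = 0.025     # assumed mean velocity [m/s]
D = 0.003     # assumed pipe diameter [m]

Pe = U * D / eps
print(f"Pe = {Pe:.0f}  (advection dominates diffusion when Pe >> 1)")
```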
Since the mass transport boundary layer will be confined to a very small layer adjacent to the wall we do not need to mesh the interior region, hence the mesh consists of a layer of ten prismatic elements over a thickness of 0.036R. The elements progressively grow over the thickness of domain.
The above represents a quadratic polynomial order in the azimuthal and streamwise direction and 4th order polynomial normal to the wall for a prismatic element.
We choose to use a continuous projection and an first-order implicit-explicit time-integration scheme. The DiffusionAdvancement and AdvectionAdvancement parameters specify how these terms are treated.
We integrate for a total of 30 time units with a time-step of 0.0005, necessary to keep the simulation numerically stable.
To compare with the analytical expression we numerically calculate the concentration gradient at the surface of the pipe. This is then plotted against the analytical solution by extracting the solution along a line in the streamwise direction, as shown in Fig. 6.3.
Figure 6.3: Concentration gradient at the surface of the pipe. | http://doc.nektar.info/userguide/4.3.2/user-guidese26.html |
Buckets made with Bucket Moulds bring convenience to the transportation of paint, but how should these buckets be dealt with once they have been used and discarded?
Paint buckets must not be discarded carelessly after use: they count as construction waste, and paint is a chemical product, so dumping them pollutes the environment. A used bucket can never be cleaned completely, so it should not be reused to store food, to avoid harm to the body.
According to national laws and regulations, waste paint packaging barrels are hazardous waste and need to be recycled by qualified waste processors. However, the number of waste paint buckets is huge, with a single bucket weighing roughly 1.5 kg to 2 kg, so treating them as hazardous waste involves substantial processing costs.
The above describes how to deal with the waste buckets produced from a Paint Bucket Mould, for reference only. | http://writersblock.sh/blogs/481/2412/don-t-throw-away-the-used-paint-bucket-molds
What percent of Households Receive Food Stamps in Norfolk?
The percentage of households receiving food stamps in the last year in Norfolk is 1.3%.
How Many Households in Norfolk Receive Food Stamps?
There are 1097 households out of 87249 in Norfolk receiving food stamps.
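The headline percentage follows directly from these two figures; here is a one-line check using the numbers quoted on this page.

```python
# Check of the quoted share: households receiving food stamps / total households.
receiving = 1097
total = 87249

print(f"{receiving / total * 100:.1f}% of Norfolk households received food stamps")  # ~1.3%
```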
Where does Norfolk Rank in Comparison to Other Virginia Cities for HouseHolds Receiving Food Stamps?
Norfolk ranks 183rd out of 362 cities and towns in Virginia for the least number of households receiving food stamps. A smaller numeric rank indicates a lower percentage of homes receiving food stamps.
How do other cities in Norfolk city compare to Norfolk when it comes to the percentage of homes receiving Food Stamps?
Cities must have a minimum of 1,000 residents to qualify
|City|Food Stamps|Details|
|---|---|---|
|Norfolk|1.3%|1097 of 87,249 Norfolk households received food stamps within the past year.|
| https://www.welfareinfo.org/food-stamps/virginia/norfolk |
The Alagnak river, a federally designated Wild and Scenic River that originates in Katmai National Park and Preserve, is a fisherman's paradise and the most popular fishing float trip in the Bristol Bay region. From its headwaters at Kukaklek or Nonvianuk Lake, it is a 75-mile Class I and II river with one Class III canyon that is a mile long and has a short falls, not easily portaged or lined due to the steep walls. The river is a good family and friends float trip that has fishing as a centerpiece activity. There are some hikes across the tundra of the lakes area and upper river, with tree-lined banks in the middle and lower reaches. It is a popular destination for day fishing for the many lodges in the area but is a wonderful 6-day trip with moderate whitewater and is suitable for rafts and inflatable kayaks for those with paddling experience. This area is filled with Alaskan Brown Bears and is remote wilderness even though you will see other people along the way.
Put In
The put in involves flying a regularly scheduled flight from Anchorage to King Salmon where you can find a charter flight service to fly you by floatplane to one of the two headwater lakes. The 75-mile trip that has the Class III canyon begins out of Kukaklek Lake and is best done in small rafts. If you don't want to have to line or portage your kayaks through the Canyon, start at Nonvianuk Lake and float the 11 miles of Nonvianuk River to where it joins the Kukaklek outflow. Added to the remaining 56 miles gives you a 67-mile trip.
Take Out
The take out is normally by floatplane from the deep-water sections of the last 10 to 30 miles. Your pilots will likely show you where they will want you to be. The last 10 miles of river can be affected by both tide and westerly winds, but you could paddle on to the Kvichak River and down to where it merges into the Kvichak Delta, to the abandoned village of Hallersville, where there is a gravel strip at the abandoned cannery, or to the village of Kvichak.
The Trip
Headwater Lakes to Confluence: 11 or 19 miles
The most common starting point is from Kukaklek Lake and is a 19-mile run with the canyon at mile 15. The other starting point is from Nonvianuk Lake and is an 11-mile run without the Class III canyon. In the area of Kukaklek Lake, you are in the rolling hills and there are some nice tundra hikes that can be done from this area. Once you exit the lake it runs west first then south where it passes the entrance/outflow of another small lake to the south before finally becoming a swift rocky stream with a moderately paced current at around 3 to 5 mph. This area is dry tundra with nice views of surrounding countryside. After 6 miles the river becomes enclosed by spruce forest and runs through shallow Class II boulder gardens with the current picking up to 6 mph as it winds and turns on its approach to the small canyon that will start at GPS N 59.06 degrees by W 155.75 degrees. The river swings from a westerly course to a southerly one as it approaches the canyon. Within a mile there are several Class II rapids with the short "falls" coming rapidly. Kayaks will want to line here and everyone should stop to scout and see that the run has no sweepers or logjams in it. Due to the steep walls, lining is not easy and portaging is even harder so you should be prepared to run this. In rafts this is no problem. Kayakers should have solid paddling skills to keep from tipping and being washed over the falls, which comes quickly. Beyond the canyon is 5 miles of swift but nontechnical water to the confluence where the two lake streams meet. If you choose to start at Nonvianuk Lake the run is 11 miles of Class II river. The outlet of this lake is one of the most popular fishing spots and will likely be crowded with fishermen and floatplanes. The Nonvianuk River has no Class III canyon to negotiate and is similar in that it flows through the dry tundra hills and ridges to where it meets the outflow of Kukaklek Lake.
Confluence through the Braids: 15 miles
From this point the river opens up and heads west in a much bigger valley. For 15 miles the river swings through a braided river channel, where wading and fishing is ideal on the gravel areas near tributary mouths. There are brushy islands and long gravel bars with only Class I water to contend with. The first mile is a single channel but soon the valley widens and gravel bars separate the river into channels. For 3 miles the river glides along with small channels to choose from. There are several major forks where you might be able to go either way, but odds are better when you stay with the main current. For 6 miles the river runs west to where you will see on the left a fishing lodge at GPS N 59.00 degrees by W 156.03 degrees. After 10 miles the river runs a northwesterly course for 2 miles then heads west again. Slowly, the river runs the last bit turning more north with each mile.
Lower River to Kvichak River: 41 miles
The lower 40 miles of river are more enclosed by forest and it meanders among downed trees with brushy banks in a wide river channel. There may likely be powerboats along this section of river as well as semi-permanent tent camps. The river is 75 feet wide at times and the banks are grassy but dry enough for tent sites in most areas. Here, the river runs due north for 2 miles and along here, just through the brush on the left of the river, are miles of dry riverbed from back when the Alagnak was bigger and flowing over there. There are some nice dry walks to go on there, but be watchful of bears. The river braids more and the banks are brushier once the river starts westward again. But after 2 or 3 miles of this, the river becomes more single or double channeled and is deep, clear and full of fish. The bears are numerous anywhere at any time along here. There is another fishing lodge with an airstrip at GPS N 59.09 degrees by W 156.46 degrees on the left of the river where the river begins turning to the south. The river turns due south for a final time then westward again to where the last 10 miles of river are tidally influenced with slow to no current. This is why it is best to get picked up by floatplane along here rather than floating further. The final 10-mile stretch begins about GPS N 59.03 degrees by W 156.68 degrees. There is an airstrip at the abandoned village of Hallersville at the confluence or at Kvichak a mile and a half down the Kvichak River on the left of the river at GPS N 58.97 degrees by W 156.93 degrees.
Departmental publications include content generated by one or more members of the faculty, and production is overseen by the faculty member, group or department.
Production costs are handled through the department account, and the finished product is delivered to the bookstore. We will do a retail markup and sell the book at our store along with regular textbooks. At the end of the semester, the bookstore credits the specified account number for copies sold and returns unsold books to the department.
Deliver master copies to Campus Print and Mail Services three–five weeks before the semester begins to avoid the backlog of syllabi and other material that must be copied at the beginning of each semester. This ensures the books will be ready for your students on time.
If your material is a departmental publication, include the following information on your textbook adoption form. Otherwise, please contact our textbook manager, Craig Thelen, at [email protected] or extension x7381, with the information necessary to get the book to your students as soon as it arrives.
One simple graph, the stem-and-leaf graph or stem plot, comes from the field of exploratory data analysis. It is a good choice when the data sets are small. To create the plot, divide each observation of data into a stem and a leaf. The leaf consists of the final significant digit. For example, 23 has stem 2 and leaf 3. Four hundred thirty-two (432) has stem 43 and leaf 2. Five thousand four hundred thirty-two (5,432) has stem 543 and leaf 2. The decimal 9.3 has stem 9 and leaf 3. Write the stems in a vertical line from smallest to largest. Draw a vertical line to the right of the stems. Then write the leaves in increasing order next to their corresponding stem.
For Susan Dean's spring pre-calculus class, scores for the first exam were as follows (smallest to largest): 33; 42; 49; 49; 53; 55; 55; 61; 63; 67; 68; 68; 69; 69; 72; 73; 74; 78; 80; 83; 88; 88; 88; 90; 92; 94; 94; 94; 94; 96; 100.
|Stem||Leaf|
|3||3|
|4||299|
|5||355|
|6||1378899|
|7||2348|
|8||03888|
|9||0244446|
|10||0|
The stem plot shows that most scores fell in the 60s, 70s, 80s, and 90s. Eight out of the 31 scores or approximately 26% of the scores were in the 90's or 100, a fairly high number of As.
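A stem plot like the one above is easy to build programmatically. The following Python sketch (not part of the original text) regroups the exam scores, read off the table above, by stem and prints the same display.

```python
from collections import defaultdict

scores = [33, 42, 49, 49, 53, 55, 55, 61, 63, 67, 68, 68, 69, 69,
          72, 73, 74, 78, 80, 83, 88, 88, 88, 90, 92, 94, 94, 94, 94, 96, 100]

# Split each score into a stem (all digits but the last) and a leaf (last digit).
stems = defaultdict(list)
for score in sorted(scores):
    stems[score // 10].append(score % 10)

for stem in sorted(stems):
    leaves = "".join(str(leaf) for leaf in stems[stem])
    print(f"{stem:>3} | {leaves}")
```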
The stem plot is a quick way to graph and gives an exact picture of the data. You want to look for an overall pattern and any outliers. An outlier is an observation of data that does not fit the rest of the data. It is sometimes called an extreme value. When you graph an outlier, it will appear not to fit the pattern of the graph. Some outliers are due to mistakes (for example, writing down 50 instead of 500) while others may indicate that something unusual is happening. It takes some background information to explain outliers. In the example above, there were no outliers.
Create a stem plot using the data:
The data are the distance (in kilometers) from a home to the nearest supermarket: 1.1; 1.5; 2.3; 2.5; 2.7; 3.2; 3.3; 3.3; 3.5; 3.8; 4.0; 4.2; 4.5; 4.5; 4.7; 4.8; 5.5; 5.6; 6.5; 6.7; 12.3.
The value 12.3 may be an outlier. Values appear to concentrate at 3 and 4 kilometers.
|Stem||Leaf|
|1||1 5|
|2||3 5 7|
|3||2 3 3 5 8|
|4||0 2 5 5 7 8|
|5||5 6|
|6||5 7|
|7|
|8|
|9|
|10|
|11|
|12||3|
Another type of graph that is useful for specific data values is a line graph. In the particular line graph shown in the example, the x-axis consists of data values and the y-axis consists of frequency points. The frequency points are connected.
In a survey, 40 mothers were asked how many times per week a teenager must be reminded to do his/her chores. The results are shown in the table and the line graph.
|Number of times teenager is reminded||Frequency|
|0||2|
|1||5|
|2||8|
|3||14|
|4||7|
|5||4|
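The line graph described here can be reproduced with a few lines of matplotlib; the sketch below plots a frequency point for each number of reminders from the table and connects the points.

```python
import matplotlib.pyplot as plt

reminders = [0, 1, 2, 3, 4, 5]      # number of times teenager is reminded
frequency = [2, 5, 8, 14, 7, 4]     # frequency reported by the 40 mothers

plt.plot(reminders, frequency, marker="o")   # frequency points, connected
plt.xlabel("Number of times teenager is reminded")
plt.ylabel("Frequency")
plt.show()
```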
Bar graphs consist of bars that are separated from each other. The bars can be rectangles or they can be rectangular boxes and they can be vertical or horizontal.
The bar graph shown in Example 4 has age groups represented on the x-axis and proportions on the y-axis.
By the end of 2011, in the United States, Facebook had over 146 million users. The table shows three age groups, the number of users in each age group and the proportion (%) of users in each age group. Source: http://www.kenburbary.com/2011/03/facebook-demographics-revisited-2011-statistics-2/
|Age groups||Number of Facebook users||Proportion (%) of Facebook users|
|13 - 25||65,082,280||45%|
|26 - 44||53,300,200||36%|
|45 - 64||27,885,100||19%|
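A corresponding bar graph can be drawn the same way; this illustrative matplotlib sketch places the three age groups from the table on the x-axis and the proportions on the y-axis.

```python
import matplotlib.pyplot as plt

age_groups = ["13-25", "26-44", "45-64"]
proportions = [45, 36, 19]          # percent of Facebook users in each group

plt.bar(age_groups, proportions)
plt.xlabel("Age group")
plt.ylabel("Proportion of Facebook users (%)")
plt.show()
```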
The columns in the table below contain the race/ethnicity of U.S. Public Schools: High School Class of 2011, percentages for the Advanced Placement Examinee Population for that class and percentages for the Overall Student Population. The 3-dimensional graph shows the Race/Ethnicity of U.S. Public Schools (qualitative data) on the x-axis and Advanced Placement Examinee Population percentages on the y-axis . ( Source: http://www.collegeboard.com and Source: http://apreport.collegeboard.org/goals-and-findings/promoting-equity )
|Race/Ethnicity||AP Examinee Population||Overall Student Population|
|1 = Asian, Asian American or Pacific Islander||10.3%||5.7%|
|2 = Black or African American||9.0%||14.7%|
|3 = Hispanic or Latino||17.0%||17.6%|
|4 = American Indian or Alaska Native||0.6%||1.1%|
|5 = White||57.1%||59.2%|
|6 = Not reported/other||6.0%||1.7%|
Go to Outcomes of Education Figure 22 for an example of a bar graph that shows unemployment rates of persons 25 years and older for 2009.
Source: The Economist
THE Greek founders of philosophy constantly debated how best to live the good life. Some contended that personal pleasure is the key. Others pointed out that serving society and finding purpose is vital. Socrates was in the latter camp, fiercely arguing that an unvirtuous person could not be happy, and that a virtuous person could not fail to be happy. These days, psychologists tend to regard that point as moot, since self-serving “hedonic” pleasures generate the same sorts of good feelings as those generated by serving some greater “eudaimonic” purpose. However, a study just published in the Proceedings of the National Academy of Sciences, by Barbara Fredrickson, a psychologist at the University of North Carolina, Chapel Hill, and her colleagues suggests Socrates had a point. Though both hedonic and eudaimonic behaviour bring pleasure, the eudaimonic sort also brings health.
Dr Fredrickson, an expert on positive emotions, has long known that happiness benefits health and leads to longer lives. Similarly, she knows that both hedonic and eudaimonic pleasures generate feelings that people describe as “happiness”. A simple syllogism, therefore, suggests happiness does indeed bring health and longevity. But, because of the overlap between the happiness-generating properties of both hedonic and eudaimonic pleasures, she had, until she conducted this study, found it impossible to determine whether both are able to improve physical health and longevity, or whether only one of them can.
To solve the puzzle, she and a team of genomics researchers led by Steven Cole of the University of California, Los Angeles, recruited 84 volunteers for an experiment that examined genes associated with health while simultaneously probing happiness in a way that would tease apart hedonic and eudaimonic well-being. The team interviewed participants over the phone to make sure none suffered from any chronic illness or disability (four were eliminated this way). The rest were given online questionnaires in which they were asked questions that probed their happiness. These included, “In the past week how often did you feel happy?” and, “How often did you feel satisfied?” both of which were intended to assess hedonic well-being. To assess eudaimonic well-being they asked questions like, “In the past week how often did you feel that your life had a sense of direction or meaning to it?” and “How often did you feel that you had something to contribute to society?” The answers to these questions could score from nought to five points. Nought indicated “never”. Five indicated “every day”. The questionnaires also collected information on participants’ age, sex, race, smoking, alcohol consumption and recent symptoms of minor illness, like headaches and upset stomachs.
1. Introduction {#sec1}
===============
Breast density (BD) reflects the proportion of fibroglandular tissue in the breast and is one of the strongest independent predictors of breast cancer risk \[[@B1]--[@B4]\]. The most widely used method for measuring BD is the histogram segmentation method (HSM) using mammographic images, as pioneered by Byng et al. \[[@B5]\]. HSM is a user-guided graphic interactive thresholding method that is semiautomatic and computer-assisted but is also time-consuming, labor intensive, and subjective.
Mammography is designed to detect early breast cancer rather than to measure BD, and the radiation dose required for detecting cancer is greater for women with dense breasts. The multiple possible variations in instrument settings can confound the use of mammograms for BD estimates, and for this reason phantoms or step-wedge standards are included for calibration of mammography when measuring volumetric density \[[@B6], [@B7]\]. Individualized imaging parameters are routinely stored in the DICOM header of the mammogram report. We developed a mathematical model (MATH) that uses a substantial number of these individualized imaging parameters to automatically compute BD upon mammogram acquisition, thereby omitting the laborious HSM procedure \[[@B8], [@B9]\]. The full field digital mammography (FFDM) unit also routinely estimates and records percent glandular breast tissue. This estimate is used by the FFDM unit to optimize radiation dose for final screening mammography.
Mammography projects a 3-dimensional (3D) tissue into a 2-dimensional (2D) image. Thus, area measured from a 2D image can be expected to deviate from 3D volumes. Shepherd et al. \[[@B10]\] developed compressible breast phantoms with known and varying breast composition (e.g., 0--80% glandular tissue) which were imaged together with each mammogram. The density in the phantom was then used to calibrate the density in the pixels of a 2D mammogram. This algorithm considers the effect of breast compression on breast density. Using this approach, glandular volume measurements were found to be more strongly associated with breast cancer risk than with glandular area measurements alone \[[@B10]\]. We have shown that total volume (TV), glandular volume (GV), and adipose (fat) volume (FV) of the breast can be easily and reasonably approximated by multiplying the fat and gland tissue areas of the mammogram by the compression thickness of the breast as recorded in the mammogram DICOM header report \[[@B9]\].
The common use of mammography for breast cancer screening is due in part to its low cost. Limitations include a 2D projection of the compressed breast. Due to radiation exposure, mammography is not commonly applied to women less than 45 years old, unless medically indicated. Lack of mammographic imaging data in younger women makes it difficult to assess the role of BD in women of younger age in predicting later-in-life breast cancer risk. Thus, there is increased interest in the use of magnetic resonance imaging (MRI) for acquiring breast images, because it avoids radiation exposure and provides 3D images.
Several feasible MRI protocols for measuring fibroglandular tissue are available and the imaging protocols are typically a variation of clinically used T1 relaxation-rate MRI protocols, with or without fat suppression \[[@B9]\]. Four alternative conceptual approaches for estimating the volume of breast glandular tissue from MRI data have been investigated, namely, (I) segmentation of glandular and fatty tissues by an interactive thresholding algorithm \[[@B11], [@B12]\], (II) use of a clustering algorithm \[[@B13], [@B14]\], (III) a logistic function approach \[[@B15]\], or (IV) a curve-fitting algorithm \[[@B9]\].
We previously showed that breast glandularity measured as percent glandular tissue (%-G) (commonly referred to in the literature as percent breast density), glandular tissue volume (GV), fat volume (FV), and total volume (TV) from mammographic and MRI images were highly correlated with one another by ordinary least square regression (*R* ^2^) and intraclass correlation (ICC) analyses (all correlation coefficients \> 0.75) \[[@B9]\]. Because there is no "gold standard" for measuring breast tissue composition, to further assess the usefulness of these measurement methods, we compared the similarities among patterns of biological predictors of BD measured by two breast images (MRI and mammography) and five breast density estimation methods.
2. Materials and Methods {#sec2}
========================
2.1. Study Design {#sec2.1}
-----------------
The main purpose of this study was to investigate the effects of methods of imaging the breast and measuring BD on biological features that may be associated with BD. BD measures by three new methods (MATH and two MRI methods) and by a FFDM unit were compared to that by a widely used HSM. The two MRI methods were a gradient-echo pulse sequence (3DGRE) and a fat suppressing, fast inversion spin echo pulse sequence (STIR). Data for dependent and independent study variables included only those that could be measured objectively. The study was compliant with HIPAA regulations and was approved by the Institutional Review Board of the University of Texas Medical Branch and the Human Research Protection Office of the US Army Medical Research and Materiel Command. Written informed consent was obtained from all subjects.
Healthy premenopausal women of all major races/ethnicities, living within 80 km of Galveston, Texas, were recruited, using webmail, posted advertisements, and postal mail. Women were 30 to 40 years old with regular monthly menstrual cycles. Subjects who were breast feeding, pregnant, expecting to become pregnant, or had used any type of contraceptive medication (oral, injection, or patch) within the prior 6 months were excluded. Multiple fasting blood samples from two separate menstrual cycles, one screening digital mammogram and two breast MR images, were all obtained during the same or separate luteal phase not more than 3 menstrual cycles apart. Only images of the left breast were analyzed in this study. Anthropometric and reproductive variables were also obtained.
2.2. Main Study Outcomes (Dependent Variables) and Their Measurement Methods {#sec2.2}
----------------------------------------------------------------------------
There were four BD outcomes of interest, %-G, GV, FV, and TV, for multivariate regression model analyses. These were obtained in a sample of 320 women by five methods, three from 2D mammography (HSM, MATH, and FFDM) and two from 3D MRI (3DGRE and STIR). The total breast is readily isolated from surrounding background and tissue on both mammographic and MR images. Mammography generated one image and one total breast area/volume for analysis by HSM, MATH, and FFDM, and MRI generated two images and two total breast volume estimates using 3DGRE and STIR.
2.3. Digital Mammography Methods (HSM, MATH, and FFDM) {#sec2.3}
------------------------------------------------------
We developed software in-house for BD analyses using digital mammograms \[[@B8]\] by applying the HSM algorithm of Byng et al. \[[@B5]\]. Briefly, the unprocessed (raw) and the processed digital mammograms were acquired using a GE Senographe 2000D FFDM unit (General Electric Healthcare Institute, Waukesha, WI). Craniocaudal (CC) and mediolateral-oblique (MLO) views of the left and right breasts were acquired. The raw CC view of the left breast was quantified for total breast area (*T* ~AREA~), fibroglandular area (*G* ~AREA~), fat (adipose) area (*F* ~AREA~), and %-breast density (%-G) \[[@B8]\]. The processed images were not suitable for BD analyses, because the window and level settings varied between mammograms in order to provide sharp contrast between dense and nondense tissues to meet diagnostic needs for detecting breast cancer. However, the raw images allowed us to apply a consistent algorithm for setting the window and level for image viewing and dense tissue segmentation and were used for BD estimation.
Briefly, the breast tissue region of interest (ROI) was isolated from the chest wall and muscle to obtain the total breast area for each mammogram and for generating a signal-intensity histogram of the breast ROI. With the aid of graphical user-interactive software, an analyst subjectively selected suitable signal intensity from the histogram as a threshold that best segmented glandular area (*G* ~AREA~) from fat tissue area (*F* ~AREA~). For the HSM method, total breast area (*T* ~AREA~) is the sum of *G* ~AREA~ and *F* ~AREA~ and %-G is calculated as the ratio of *G* ~AREA~/*T* ~AREA~. This analyst-dependent process took about 30 min.
GV, FV, and TV were the products of the respective tissue mammogram areas, the compression thickness, and a unit correction factor. For the viewing geometry of our imager, the unit correction factor for converting pixel area to mL (or cc) was 9.96, as described previously \[[@B9]\]. The DICOM header report included both preexposure and final exposure compression thickness. Preexposure compression thickness was used to estimate volumes, as follows:
$$\begin{aligned}
\%\text{-G} &= \frac{G_{\text{AREA}}}{G_{\text{AREA}} + F_{\text{AREA}}} = \frac{G_{\text{AREA}}}{T_{\text{AREA}}},\\
\text{GV} &= 9.96 \cdot G_{\text{AREA}} \cdot \text{compression thickness},\\
\text{FV} &= 9.96 \cdot F_{\text{AREA}} \cdot \text{compression thickness},\\
\text{TV} &= \text{GV} + \text{FV} = 9.96 \cdot T_{\text{AREA}} \cdot \text{compression thickness}.
\end{aligned}$$
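For illustration only (this helper is not from the original paper), the arithmetic above can be collected into a small function that takes the segmented areas and the preexposure compression thickness and returns the HSM measures; the 9.96 unit factor applies to the viewing geometry of the imager described in the text.

```python
def hsm_breast_measures(g_area_px, f_area_px, compression_thickness,
                        unit_factor=9.96):
    """%-G (as a fraction) and GV, FV, TV in mL from a segmented mammogram.

    g_area_px / f_area_px: glandular and fat areas from the segmentation,
    compression_thickness: preexposure compression thickness from the
    DICOM header, unit_factor: pixel-area-to-mL conversion for this imager.
    """
    t_area_px = g_area_px + f_area_px
    pct_g = g_area_px / t_area_px
    gv = unit_factor * g_area_px * compression_thickness
    fv = unit_factor * f_area_px * compression_thickness
    tv = gv + fv
    return pct_g, gv, fv, tv
```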
For the MATH method, %-G was computed using the following multivariate regression model equation that included image data from postmenopausal and other premenopausal women not involved in this study \[[@B8], [@B9]\]:
$$\begin{aligned}
\%\text{-G} = {}& 481.33 - 0.0057 \cdot \text{preexposure dose} + 1.2305 \cdot \text{preexposure thickness}\\
& - 0.094 \cdot \text{radiation dose} + 5.2056 \cdot \text{preexposure kvp}\\
& - 0.0599 \cdot \text{anatomical mean intensity} - 0.0192 \cdot \text{Thresh}\\
& - 2.0223 \cdot \text{final exposure thickness} - 0.049 \cdot \text{compression force}\\
& - 37220 \cdot \text{detector sensitivity} - 1.9863 \cdot \text{filter material}\\
& + 25.314 \cdot \text{anode material}.
\end{aligned}$$
All variables in ([2](#EEq1.5){ref-type="disp-formula"}) are used by the digital mammography unit to produce a screening image and are strong and significant predictors of BD. The DICOM tag for each variable for the specific mammographic unit used for this study has been described previously \[[@B8]\]. (Note: the DICOM tags may differ for different scanners.) The data for each imaging variable was retrieved from the mammogram DICOM header. The filter material and anode material were either molybdenum or rhodium, which were coded as 1 or 0, respectively, for calculating %-G. The %-G obtained from ([2](#EEq1.5){ref-type="disp-formula"}) was then used to calculate GV and FV for the MATH method using the following approaches:
$$\begin{aligned}
\text{GV} &= \text{TV} \cdot \%\text{-G},\\
\text{FV} &= \text{TV} \cdot \left( 1 - \%\text{-G} \right).
\end{aligned}$$
The FFDM unit itself gives an estimate of percent breast density for each mammogram, which is also available from the mammogram DICOM header as "Raddose" and "precompo." Values for Raddose are almost the same as for precompo. Raddose values were used to represent %-G from the FFDM unit for calculating GV and FV, according to ([3](#EEq1.6){ref-type="disp-formula"}).
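As a hedged sketch of how the MATH estimate could be computed automatically from header data, the code below hard-codes the regression coefficients from equation (2); the dictionary keys are hypothetical names standing in for the DICOM tags, which differ between scanner models, and %-G is assumed to be expressed as a percentage.

```python
# Coefficients of the MATH regression model (equation (2) above).
MATH_INTERCEPT = 481.33
MATH_COEFFS = {
    "preexposure_dose": -0.0057,
    "preexposure_thickness": 1.2305,
    "radiation_dose": -0.094,
    "preexposure_kvp": 5.2056,
    "anatomical_mean_intensity": -0.0599,
    "thresh": -0.0192,
    "final_exposure_thickness": -2.0223,
    "compression_force": -0.049,
    "detector_sensitivity": -37220.0,
    "filter_material": -1.9863,   # molybdenum coded as 1, rhodium as 0
    "anode_material": 25.314,     # molybdenum coded as 1, rhodium as 0
}


def math_percent_glandular(header_values):
    """%-G from imaging parameters read out of the mammogram DICOM header."""
    return MATH_INTERCEPT + sum(
        coeff * header_values[name] for name, coeff in MATH_COEFFS.items()
    )


def math_volumes(pct_g, total_volume_ml):
    """GV and FV via equation (3), assuming pct_g is a percentage (0-100)."""
    frac = pct_g / 100.0
    return total_volume_ml * frac, total_volume_ml * (1.0 - frac)
```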
2.4. Magnetic Resonance Imaging (MRI) Methods (3DGRE and STIR) {#sec2.4}
--------------------------------------------------------------
The 3DGRE and STIR breast MRIs were performed as described previously \[[@B9]\]. Briefly, subjects were scanned in a prone position using a 1.5-Tesla MR scanner (General Electric, Waukesha, WI). The 3DGRE, a gradient-echo pulse sequence, took 3 minutes to be completed, and the imaging parameters were repetition time/echo time (TR/TE) = 5.9/1.4 ms, flip angle = 10°, acquisition matrix size = 256 × 256, reconstruction matrix size = 512 × 512, number of excitation (NEX) = 2, field of view (FOV) = 28--35 cm, and slice thickness = 1.5 mm (interpolated). The STIR protocol, a fat suppressing, fast inversion spin echo pulse sequence, took about 15 minutes to be completed, and the imaging parameters were TR/TE = 6050/12.9 ms, flip angle = 90°, an inversion time of 150 ms, acquisition matrix = 256 × 192, reconstruction matrix = 256 × 256, FOV = 28--35 cm, and slice thickness = 2 mm with 0 gap. The image acquisition was interleaved and repeated three times. After a MRI procedure, a 3D volume-rendered breast model was generated for the left breast ROI from either the 3DGRE or STIR protocol, respectively \[[@B9]\].
2.5. Curve-Fitting and Estimation of Glandular Tissue from Breast MR Images {#sec2.5}
---------------------------------------------------------------------------
Details for the analysis of breast tissue volume in mL or cm^3^ have been described \[[@B9]\]. Briefly the final segmented 3D volume-rendered breast model was used to generate a histogram of MRI voxel signal intensity. The histogram was then used for Gaussian curve-fitting analysis using a commercially available peak-fitting program, PeakFit 4.0 (SyStat Software Inc., San Jose, CA). The curve-analysis estimated the relative distribution of areas under the adipose and glandular breast tissue curves of the histogram, respectively, based on the assumption that breast tissue contained only two compartments, that is, adipose and fibroglandular tissues.
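The paper performed this step with the commercial PeakFit program; purely as an illustrative stand-in, the sketch below fits a two-Gaussian model to a voxel-intensity histogram with SciPy and returns the relative areas under the two peaks. The starting guesses and which peak corresponds to adipose versus glandular tissue depend on the pulse sequence and are assumptions here.

```python
import numpy as np
from scipy.optimize import curve_fit


def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian peaks modelling a two-compartment histogram."""
    g1 = a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2)
    return g1 + g2


def tissue_fractions(bin_centers, counts, p0):
    """Fit the voxel-intensity histogram; return the two peak-area fractions."""
    params, _ = curve_fit(two_gaussians, bin_centers, counts, p0=p0)
    a1, _, s1, a2, _, s2 = params
    area1 = a1 * abs(s1) * np.sqrt(2.0 * np.pi)   # area under the first peak
    area2 = a2 * abs(s2) * np.sqrt(2.0 * np.pi)   # area under the second peak
    total = area1 + area2
    return area1 / total, area2 / total
```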
The final segmented 3D volume-rendered breast model was also subjected to volume analysis for the resampled/reconstructed 3D model using GE 3D Advantage Windows Workstation software version 4.1 (GE Healthcare Institute, Waukesha, WI), as follows. The reconstructed voxel size is the size of voxel in mm in both *x*- and *y*-directions. The voxel ratio is the ratio between the size of the voxels in the *z*-direction and in the *x*-direction. The voxel size and the voxel ratio of the reconstructed 3D model were recorded in the model DICOM header report and were retrieved for calculating voxel volume (mm^3^) which is the product of voxel ratio and (reconstructed voxel size)^3^. This approach provided volume in mL (cm^3^) for each breast tissue for direct comparison with volume estimated from mammograms as described above.
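The voxel bookkeeping described above reduces to a one-line calculation; the hedged helper below converts a voxel count for a segmented tissue into millilitres using the reconstructed voxel size and voxel ratio read from the model's DICOM header.

```python
def voxel_volume_ml(n_voxels, reconstructed_voxel_size_mm, voxel_ratio):
    """Tissue volume in mL from a voxel count.

    Voxel volume (mm^3) is the voxel ratio times the cube of the
    reconstructed voxel size; 1 mL = 1 cm^3 = 1000 mm^3.
    """
    voxel_mm3 = voxel_ratio * reconstructed_voxel_size_mm ** 3
    return n_voxels * voxel_mm3 / 1000.0
```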
2.6. Anthropometrics, Body Composition, and Reproductive Factors {#sec2.6}
----------------------------------------------------------------
Body weight (kg), height (m), body mass index (BMI = kg/m^2^), waist circumference (in cm at the umbilicus), and hip circumference (in cm at the widest point around the buttocks) were obtained. Additionally, total body mass, lean body mass, and fat body mass were measured in duplicate (before and after repositioning), with the subject in a supine position, using dual energy X-ray absorptiometry (DEXA) (Model Discovery A, Model QDR4500A, Hologic, Waltham, MA). Average values of duplicate measurements were used for statistical analyses. Demographic and reproductive information (race, ethnicity, ages of menarche, first pregnancy, last pregnancy, and the number of completed pregnancies) were obtained using a self-administered questionnaire.
2.7. Analyses of Hormones and Blood Chemistries {#sec2.7}
-----------------------------------------------
Multiple fasting venous blood samples, drawn between 8:00 and 10:00 a.m., and between 20 and 24 days after menstrual spotting, were analyzed for 17*β*-estradiol, progesterone, insulin, insulin-like growth factor-I (IGF-I), insulin-like growth factor-II (IGF-II), sex hormone binding globulin (SHBG), and C-reactive protein (CRP). Enzyme-linked immunosorbent assay (ELISA) kits were used for measuring serum CRP (sensitivity 1.6 ng/mL) and SHBG (sensitivity 0.61 nmol/L). Immunoradiometric assays (IRMA) were used to measure serum IGF-I (sensitivity 10 ng/mL) and serum IGF-II (sensitivity 12 ng/mL). Radioactive immunoassay (RIA) kits were used to measure plasma 17*β*-estradiol (sensitivity 7 pg/mL), plasma progesterone (sensitivity 0.1 ng/mL), and serum insulin concentrations (sensitivity 1.3 *μ*IU/mL). All immunoassays were performed using commercially available kits (Diagnostic System Laboratories, Inc., Webster, TX). The intra- and interassay coefficients of variation for all analytes were \<10%. Means of serum hormone concentrations from different study visits were used for statistical analyses.
Numerous fasting serum analytes, including glucose, total cholesterol, high-density lipoprotein cholesterol (HDL), triglycerides, alanine aminotransferase (ALT), aspartate aminotransferase (AST), and alkaline phosphatase (ALP), were measured by a certified hospital clinical laboratory using VITROS 5.1 FS (Ortho-Clinical Diagnostics, Rochester, NY).
2.8. Statistical Analyses {#sec2.8}
-------------------------
Data are presented as means and 95% confidence intervals (95% CI) of the mean for continuous variables and as frequencies for the categorical variables (ethnicity and parity). Main outcomes-of-interest are presented as box plots (SigmaPlot 12, Systat Software, Inc., San Jose, CA).
In a sample of 137 subjects from whom blood chemistries and hormone data were available at the time of statistical analyses, univariate associations between dependent variables (%-G, GV, FV, and TV) and predictor variables were computed. Exploratory multivariate analyses between the dependent variables and predictor variables were performed by the GLMSELECT procedure in SAS (with stepwise, forward LAR and LASSO options) to select the best models using information criteria such as AIC, BIC, and Cp. Good models have small values of these criteria, which guided the selection of candidate predictors. GLMSELECT models were run with %-G, GV, FV, and TV as dependent variables together with a block of anthropometric measures (body weight, height, BMI, waist and hip circumference, and fat and lean body mass) or a block of blood chemistry variables (a lipid panel of cholesterol, HDL, LDL, VLDL, and triglycerides, liver enzymes of ALP, ALT, and AST, and hormones). Predictor variables, selected consistently in GLMSELECT models for all outcome variables of interest, were included in the final models. We are not aware of any prior studies examining the relationship between routinely measured blood chemistries and BD. Such relationships were explored in this study in a preliminary fashion because the liver metabolizes ovarian steroids, whole body adiposity affects liver function and breast cancer risk, and predictors of GV are few (for more details, see [Section 4](#sec4){ref-type="sec"}).
All models were adjusted for age and reproductive variables known to influence BD, such as age of menarche and number of completed pregnancies. IGF-I, IGF-II, 17*β*-estradiol, progesterone, SHBG, CRP, and insulin have been studied for association with BD and breast cancer risk, and they were included as predictor variables in the final multivariate models. There was no multicollinearity problem among variables in the final models as indicated by variance inflation factors (all \<5).
The final multivariate model also included methods of measurement of BD as predictor variables and interaction terms between measurement methods and respective predictor variables. We performed similarity test procedures of *β*-estimates across methods of measurement by a deviance test or log likelihood test for comparing the full versus the nested models. Post hoc pairwise comparisons with false discovery rate (FDR) adjustment were used to assess differences \[[@B16]\]. The effects of measurement methods on multivariate regression models were validated in another sample of 320 women from whom demographic, anthropometric, and reproductive variables were available but not blood chemistries or hormones. A significance level of *α* = 0.05 was used in our analyses. The statistical analyses were performed using the SAS statistical software package version 9.2 (SAS Institute, Cary, NC). The scatter plot matrix that included histograms was generated using R software (<http://cran.r-project.org/>, version 3.1.0).
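The original analyses were run in SAS; as a rough Python analogue of the full-versus-nested comparison, the sketch below fits a model with a method-by-BMI interaction and one without it using statsmodels and compares them with a likelihood-ratio test. The data-frame column names are hypothetical, and in practice an F-test on the interaction terms would be an equally reasonable choice.

```python
import statsmodels.formula.api as smf
from scipy import stats


def method_interaction_lrt(df):
    """Likelihood-ratio test for a method-by-BMI interaction on %-G.

    df is assumed to have columns: pct_g, bmi, age, parity, method
    (method being the BD measurement method as a categorical label).
    """
    full = smf.ols("pct_g ~ bmi * method + age + parity", data=df).fit()
    nested = smf.ols("pct_g ~ bmi + method + age + parity", data=df).fit()
    lr = 2.0 * (full.llf - nested.llf)
    df_diff = full.df_model - nested.df_model
    p_value = stats.chi2.sf(lr, df_diff)
    return lr, p_value
```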
3. Results {#sec3}
==========
The racial/ethnic composition of the study population was 54% non-Hispanic White, 30% Hispanic, and 16% African American. [Table 1](#tab1){ref-type="table"} shows additional relevant characteristics of the subjects that were included in the study. Figures [1(a)](#fig1){ref-type="fig"}--[1(d)](#fig1){ref-type="fig"} show the mean and interquartile box plots of %-G, TV (in mL), GV (in mL), and FV (in mL) measured by five different methods, HSM, the FFDM unit, MATH, 3DGRE, and STIR, as applicable. Figures [2(a)](#fig2){ref-type="fig"}--[2(d)](#fig2){ref-type="fig"} show scatter plot matrices, including histograms (diagonal boxes), for four different BD measures, %-G, TV, GV, and FV, respectively. As shown, Pearson\'s correlation coefficients are high, ranging from 0.76 to 0.99 for pairwise correlation analyses in BD measured by the five methods \[[@B9]\]. Note that the distribution of %-G and GV from the FFDM unit tended to be wider; see box plots in Figures [1(a)](#fig1){ref-type="fig"} and [1(c)](#fig1){ref-type="fig"} and 2nd diagonal box histograms of Figures [2(a)](#fig2){ref-type="fig"} and [2(c)](#fig2){ref-type="fig"}).
The 2D mammography provides breast an area measure. Because fatty breast is more easily compressed than dense breast, this differential compression may bias %-breast density when estimated from mammograms. We correlated the area breast measure from mammograms with the volume measures from 3D MR images. Correlation coefficients of measures using areas with corresponding MRI volumes (from 3DGRE and STIR) were 0.83 for glandular area (*G* ~AREA~), 0.88 for glandular mammographic volume (GV = *G* ~AREA~ × compression thickness), \~0.93 for fatty breast area (*F* ~AREA~), \~0.95 for fatty mammographic breast volume (FV = *F* ~AREA~ × compression thickness), \~0.92 for total mammographic breast area (*T* ~AREA~), and \~0.94 for total mammographic breast volume (TV = *T* ~AREA~ × compression thickness) (results not graphed). Thus, conversion of mammographic area (pixel in mm^2^) to mammographic volume (cm^3^ or mL) resulted in slight improvement of correlation with MRI volumes. Note that the conversion of pixel from mammogram and voxel from breast MRI have all been corrected for viewing geometry of imagers to give mL.
3.1. Effects of Measurement Methods on Quantitative Breast Tissue Composition {#sec3.1}
-----------------------------------------------------------------------------
[Table 2](#tab2){ref-type="table"} shows that mean %-G, TV, GV, and FV values did not differ when compared within the same breast imaging modality, but GV and TV did differ when compared between MRI and mammography measures. Interestingly, mean FV differed significantly only between STIR and HSM or MATH. For %-G, means of 3DGRE differed significantly from each of the three mammographic methods, while mean %-G of STIR did not differ from mean %-G of HSM or MATH but differed from %-G of the FFDM unit. In other words, %-G of the FFDM unit was different from all other %-G measurements.
There is no gold standard for calibrating BD, and the physics behind mammography and MRI differs. Therefore, it is important to know whether correlations with biological factors known to predict breast %-G, GV, FV, and TV are affected by measurement methods.
3.2. Pearson\'s Correlation Analyses Between BD and Biological Features {#sec3.2}
-----------------------------------------------------------------------
The univariate analysis results between dependent and independent variables are shown in [Table 3](#tab3){ref-type="table"}. Pearson\'s correlation coefficients ranged from \>0.2 to 0.8 (*P* \< 0.0001 to 0.01) between %-G, FV, and TV, as measured by five different methods with all anthropometric variables except height, and with HDL, ALP, SHBG, and CRP. A consistent and significant linear correlation was observed only between CRP and GV (measured by HSM, MATH, 3DGRE, and STIR).
3.3. Effects of Measurement Methods on Regression Models of Breast Tissue Composition {#sec3.3}
-------------------------------------------------------------------------------------
The primary objective of our study was to investigate the effects of the five BD measurement methods (HSM, FFDM, MATH, 3DGRE, and STIR) on profiles of biological predictors of %-G, GV, FV, and TV.
Exploratory models were run to select strong predictors for inclusion in final multivariate models. Fat body mass, BMI, and waist-to-hip ratio were most frequently selected as predictor anthropometric variables by PROC GLMSELECT in the exploratory models. Due to strong collinearity, BMI, fat body mass, and waist-to-hip ratio were tested one at a time in the multivariate models. BMI was included in the final models, but it can be replaced by fat body mass with minimal change in the profiles and strength of significant independent predictors, that is, in terms of *β*-estimates, *P* values, and model *R* ^2^. In the sample of 137 subjects from whom blood chemistries were available at the time of statistical modeling, HDL, total cholesterol, ALP, ALT, and AST were most frequently represented by PROC GLMSELECT as significant predictor variables from blood chemistries. However, total cholesterol, HDL, and insulin were not significant independent predictors in multivariate models that included BMI or fat body mass and therefore were removed from the final regression models.
Predictor variables for BD, included in the final multivariate models, were BMI, age, age of menarche, and number of completed pregnancies (*n* = 320). Additionally, in a subset of 137 subjects, hormones (ALP, ALT, AST, SHBG, CRP, IGF-I, IGF-II, 17*β*-estradiol, and progesterone) and blood chemistries were included. [Table 4](#tab4){ref-type="table"} shows, within multivariate models on the subset of 137 subjects, standardized *β*-estimates and standard errors (SE) of the estimates for %-G, GV, FV, and TV, respectively, using HSM as a reference for the measurement method, while [Table 5](#tab5){ref-type="table"} shows the results for the sample of 320 women from whom levels of blood chemistries and hormones were not yet analyzed.
3.4. Effects of Methods of Measurement on Predictors of Breast Composition {#sec3.4}
--------------------------------------------------------------------------
Within each multivariate analysis, an interaction term for each predictor variable with measurement methods was also included. All of the interaction terms between measurement methods and biological predictor variables by deviance or likelihood ratio tests were not significant (e.g., all *P* values were between 0.20 and 1.00), so the interaction terms were removed from the final multivariate regression models. [Table 4](#tab4){ref-type="table"} shows the multivariate regression models using HSM as reference for the measurement method.
The first nested model within the multivariate model for %-G ([Table 4](#tab4){ref-type="table"}) showed a significant association between %-G and BMI (*P* \< 0.0001), number of completed pregnancies (*P* = 0.02), ALT (*P* = 0.02), AST (*P* = 0.001), progesterone (*P* = 0.04), and African-American race (*P* \< 0.05). These associations were independent of BD measurement methods. The aggregate model *R* ^2^ for %-G was 0.54. The second regression model in [Table 4](#tab4){ref-type="table"} shows that fibroglandular tissue volume (GV) was significantly associated with number of completed pregnancies (*P* = 0.0004), AST (*P* = 0.05), CRP (*P* = 0.04), and progesterone (*P* = 0.02). Again, these associations were not affected by BD measurement methods. The aggregate model *R* ^2^ for GV was 0.29. The third model in [Table 4](#tab4){ref-type="table"} shows that the adipose breast tissue (FV) had a significant association with BMI (*P* \< 0.0001), ALP (*P* = 0.04), and IGF-II (*P* = 0.004) that was also not affected by BD measurement methods. The aggregate model *R* ^2^ for FV was 0.71. The last model in [Table 4](#tab4){ref-type="table"} shows that the total breast volume (TV), as measured by digital mammography and two MRI protocols, was significantly associated with BMI (*P* \< 0.0001), number of completed pregnancies (*P* = 0.01), and IGF-II (*P* = 0.02). This association was also not affected by BD measurement methods. The aggregate model *R* ^2^ for TV was 0.65. The strong and significant association between BD and anthropometric and reproductive variables found in the sample of 137 subjects was confirmed in the larger sample of 320 women ([Table 5](#tab5){ref-type="table"}) from whom blood analytes were not available at the time of regression model analyses.
4. Discussion {#sec4}
=============
We recently demonstrated that two mammography (HSM and MATH) and two MRI-based modalities (3DGRE and STIR) could reliably measure breast tissue composition (i.e., %-G, GV, FV, and TV), in that all intraclass correlation and regression coefficient values were \>0.75 \[[@B9]\]. Because there is no gold standard for*in vivo* measurement of breast tissue content, and there are quantitative differences in estimates ([Table 2](#tab2){ref-type="table"}) that may be due possibly to differences in radiologic imaging techniques, 2D and 3D image acquisition, or tissue segmentation methods, it is important to determine if the measurement methods have any influence on correlations with known determinants of BD. In this study, we show that biological predictors of BD in a sample of 30- to 40-year-old premenopausal women were strikingly similar across all five BD measurement methods, between two different radiologic imaging modalities, and were similar to those reported in older women \[[@B17]--[@B20]\]. Our results (Tables [4](#tab4){ref-type="table"} and [5](#tab5){ref-type="table"}) suggest inference validity. Because the MATH method can compute BD automatically upon mammogram acquisition, it should be tested and validated further in future BD and breast cancer risk prediction studies in light of increasing use of digital mammography for breast cancer screening.
The strong predictors of %-G, FV, and TV found in our sample of younger women (30 to 40 years old) were, in general, in line with those reported in older women. Briefly, whole body adiposity is predictive for breast tissue adiposity, and it explained the major portion of the variances found in %-G, FV, and TV \[[@B17]--[@B20]\]. In contrast to adiposity being a dominant predictor of %-G, FV, and TV, few strong predictors of fibroglandular tissue (GV) volume were reported. Neither BMI nor other anthropometric variables were associated with GV in our study of premenopausal women or in older women from other studies \[[@B19]--[@B21]\].
Parity has been consistently reported to be negatively associated with GV \[[@B22]--[@B24]\], which was confirmed in this study by both mammography and MRI images. The strength of the negative association between parity and glandular tissue is not surprising and has been attributed to the glandular tissue remodeling known to occur after each pregnancy and lactation \[[@B25]\]. However, the negative association between parity and TV or FV of the breast ([Table 4](#tab4){ref-type="table"}) is unexpected in multivariate models controlling for BMI (Tables [4](#tab4){ref-type="table"} and [5](#tab5){ref-type="table"}) or total body fat (results not shown). Thus, the decrease in fibroglandular tissue volume in breast after each pregnancy and lactation was not accompanied by a corresponding increase in breast fat/adipose tissue volume, as has often been speculated in the literature \[[@B26]\]. This interesting finding requires further confirmation by other investigators. However, parity explained only a small percentage of the variance in GV. We also explored predictors of GV in routinely measured blood chemistries and hormones.
SHBG and CRP correlated strongly with %-G, TV, GV, and FV in correlation analyses ([Table 3](#tab3){ref-type="table"}) but were not independent predictors of %-G, TV, or FV in multivariate models when adjusted for fat body mass or BMI. This is consistent with reports showing that SHBG predicts %-G and GV~,~but not after adjustment for BMI \[[@B27], [@B28]\]. This can be explained by our previous finding that anthropometric variables are independent predictors of SHBG and CRP \[[@B29], [@B30]\]. Circulating CRP, however, remained a strong and positive independent predictor for GV across all five methods of measurement after adjusting for fat body mass and BMI. Mammary gland involution and remodeling involve components of wound healing \[[@B31], [@B32]\]. CRP, being a marker of inflammation, may play a role in remodeling as its presence has been reported in nipple aspirate fluid \[[@B33]\]. However, CRP has not been associated with breast cancer risk in epidemiologic studies \[[@B34]--[@B36]\] even though inflammation also plays an important role in breast cancer risk \[[@B37]\].
Obesity and the metabolic syndrome have been implicated in breast cancer risk \[[@B38]\]. Liver enzymes, such as ALP, AST, and ALT, are clinically useful markers for the metabolic syndrome and other obesity-related conditions \[[@B39], [@B40]\]. These enzymes were predictors of breast composition in our exploratory GLMSELECT models, but not in the final models including BMI for FV and TV. AST remained an independent predictor for both GV and %-G. The mechanisms underlying the direct association of AST with GV and %-G need further studies. The association between progesterone, estradiol, IGF-I, and IGF-II with BD was also not affected by BD measurement methods. This lack of association was consistent with some but not all literature reports \[[@B27], [@B28], [@B41]\].
CRP, AST, progesterone, and the number of completed pregnancies are more strongly associated with amounts of glandular tissue than with breast adipose tissue. Fat body mass, ALT, and IGF-II appear to be more associated with the amount of breast adipose tissue than with glandular tissue. Associations between CRP, ALT, and AST and breast tissue composition have not been reported previously, to our knowledge, and further studies will be necessary to illuminate the mechanisms involved.
The strengths of this study were the inclusion of a population of multiethnic, premenopausal subjects with strictly defined characteristics who were not using exogenous hormones. All study samples were obtained during luteal phases within a short interval. Mean levels of hormones and blood chemistries from multiple blood samples were used for statistical analyses. To our knowledge, no other studies have validated biological features predicting BD as measured by both mammography and MRI in the same study subjects. Weaknesses of the study include a relatively small number of subjects with available measures of blood chemistries and hormones, a narrow age range for inclusion, and the exclusion of postmenopausal women and breast cancer patients, thereby limiting inferences. The parameter fit coefficients for the MATH equation, while applicable for postmenopausal women as previously described \[[@B8], [@B9]\], may require calibration for different brands and models of full field digital mammographic units.
In summary, we found similarities among determinants of breast %-G, GV, FV, and TV measured by five different methods. Our results suggest that the two MRI protocols and the mathematical algorithm that we developed should be further tested in studies of risk factors related to BD and breast cancer. Importantly, the MATH method was able to adjust for the inherent manipulation of imaging parameters by the mammography unit. Whether the MATH algorithm improves risk prediction studies of breast density or breast cancer risk deserves further study, as it can automatically compute BD upon mammogram acquisition. The two MRI protocols are complementary in image acquisition for adipose and gland tissue. The sensitivity and specificity of these methods in measuring the effects of interventions that may reduce breast density and breast cancer risk require further study.
This research was supported by the US Army Medical Research and Materiel Command under DADM17-01-1-0417. (The content of the information does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred. The US Army Medical Research Acquisition Activity, 820 Chandler Street, Fort Detrick, MD, is the awarding and administering acquisition office.) The research was also supported, in part, by Grants from the National Institute of Health (NIH) R01 CA95545 and CA65628. This study was conducted with the support of the Institute for Translational Sciences at the University of Texas Medical Branch, supported in part by a Clinical and Translational Science Award (UL1TR000071) from the National Center for Advancing Translational Sciences, NIH, and by NIEHS 2 P30 ES06676. The study is registered at <http://www.clinicaltrials.gov/> and the identifier is [NCT00204477](http://www.clinicaltrials.gov/ct2/results?term=NCT00204477) and [NCT00204490](http://www.clinicaltrials.gov/ct2/results?term=NCT00204490). The authors wish to acknowledge the technical assistance of the staff of the Breast Imaging Clinic at the University of Texas Medical Branch and the nursing staff of the Institute of Translational Sciences-Clinical Research Center (ITS-CRC). Also, they are also very grateful to Dr. Marinel M. Ammenheuser for critical review of the paper. Image data were archived by the UTMB ITS-CRC Informatics Core.
BD:
: Breast density
HSM:
: Histogram segmentation method
MATH:
: Mathematical algorithm
FFDM:
: Full field digital mammography
MRI:
: Magnetic resonance imaging
%-G:
: %-Glandular tissue or %-breast density
GV:
: Glandular/fibroglandular breast volume
FV:
: Fat/adipose breast volume
TV:
: Total breast volume
3DGRE:
: a 3D gradient-echo MRI pulse sequence
STIR:
: A fat suppressing short tau inversion recovery MRI pulse sequence
TE:
: Echo time
TR:
: Repetition time
TI:
: Inversion time
FOV:
: Field of view
NEX:
: Number of excitations
BMI:
: Body mass index
ALT:
: Alanine aminotransferase
AST:
: Aspartate aminotransferase
ALP:
: Alkaline phosphatase
SHBG:
: Sex hormone binding globulin
CRP:
: C-reactive protein
IGF-I:
: Insulin-like growth factor I
IGF-II:
: Insulin-like growth factor-II.
Conflict of Interests
=====================
The authors declare that there is no conflict of interests regarding the publication of this paper.
Authors\' Contribution
======================
Lee-Jane W. Lu had overall responsibility for the conception, design, and management of the study and acquisition, analyses, and interpretation of the data for presentation and publications and has given final approval of the version to be published. Fatima Nayeem contributed to study management, including sample and data acquisition, quality-control, laboratory analyses, statistical analyses, interpretation of the data, preparation of the paper, and approval of the version to be published. Hyunsu Ju had overall responsibility for design and conduct of statistical model analyses and interpretation of the data, contributed to manuscript preparation, and approved the version to be published. Donald G. Brunder had overall responsibility for the conception of bioinformatics infrastructure for retrieving and archiving radiologic imaging data, designed and developed software for BD analyses, and was involved in data interpretation and approval of the version to be published. Karl E. Anderson had overall responsibility for conception and design of the clinical aspect of the study, interpreted clinical data and outcomes of interest, and has contributed to and approved the version to be published. Manubai Nagamani had overall responsibility for conception, design, acquisition, and interpretation of the reproductive endocrine aspect of clinical and research data and has participated in and approved the version to be published. Tuenchit Khamapirad had overall responsibility for conception, design, acquisition, and interpretation of mammography and magnetic resonance images of the breast, and design of BD analyses and has approved the version to be published. Dr. Raleigh F. Johnson, Jr., and Dr. Thomas Nishino developed the MRI image analysis methods. Rett Hutto performed the density analyses and was instrumental in finalizing the BD analysis protocol. Katrina Jencks performed many of the histogram segmentation analyses. Additionally, Mouyong Liu developed the research database. Xin Ma performed serum hormone assays. Their contributions to the project are greatly appreciated.
Figure 1: Box plots of %-G, TV (mL), GV (mL), and FV (mL) measured by the five methods (IJBC2014-961679.001).
Figure 2: Scatter plot matrices, including histograms, of %-G, TV, GV, and FV measured by the five methods (IJBC2014-961679.002).
######
General characteristics of the study subjects (*n* = 137).
n (%, column)
------------------------------------------------ ----------------------
Race/ethnicity
White 74 (54%)
Hispanic 41 (30%)
Black 22 (16%)
Mean (95% CI)
Demographics and anthropometrics
Age, y 35.9 (35.4, 36.4)
Weight, kg 74.8 (72.3, 77.4)
Height, cm 161.6 (160.4, 162.7)
BMI, kg/m^2^ 28.7 (27.8, 29.7)
Fat body mass, kg 28.2 (26.5, 29.9)
Lean body mass, kg 46.9 (45.8, 48.0)
Waist circumference, cm 87.3 (85.3, 89.4)
Hip circumference, cm 109.7 (107.7, 111.8)
Reproductive history
Age at menarche, y 12.5 (12.2, 12.8)
Age at first birth, y 23.3 (22.5, 24.2)
Years since last pregnancy 7.3 (6.4, 8.1)
Pregnancy, completed
Zero 18 (13.1%)
One 17 (12.4%)
Two 44 (32.1%)
Three and more 58 (42.3%)
Blood chemistry and hormones
Triglycerides, mg/dL 110.2 (97.4, 123)
Cholesterol, mg/dL 178.6 (173.7, 183.6)
HDL, mg/dL 53.1 (51, 55.2)
Alkaline phosphatase (ALP), U/L 70.6 (67.6, 73.7)
Alanine aminotransferase (ALT), U/L 26.9 (25.2, 28.6)
Aspartate aminotransferase (AST), U/L 21.1 (19.9, 22.3)
Sex hormone binding globulin (SHBG), nmol/L 101.9 (94.9, 108.9)
C-reactive protein (CRP), mg/L 6.8 (5.5, 8.1)
Insulin, *µ*IU/mL 12.6 (11, 14.2)
Insulin-like growth factor I (IGF-I), ng/mL 291.6 (272.4, 310.7)
Insulin-like growth factor II (IGF-II), ng/mL 865.1 (824.7, 905.5)
17*β*-Estradiol, pg/mL 132.2 (125.6, 138.9)
Progesterone, ng/mL 10.1 (9.2, 10.9)
######
Mean differences and 95% confidence interval in percent glandular tissue (%-G), gland volume (GV), fat volume (FV), and total breast volume (TV) by Tukey\'s test.
Methods compared %-G GV (mL) FV (mL) TV (mL)
-------------------- ---------------------- ------------------------ ----------------------- ------------------------
MATH versus HSM^a^ 1.1 (−2.85, 4.94)^b^ 11.4 (−32.52, 55.26) 10.7 (−55.57, 77.06) 0 (−75.98, 75.98)
STIR versus 3DGRE 2.9 (−1.00, 6.81) 22.3 (−21.65, 66.29) 24.3 (−42.17, 90.71) 1.9 (−74.32, 78.21)
3DGRE versus HSM 4.5 (0.56, 8.37)∗ 94.1 (50.09, 138.04)∗ 51.9 (−14.54, 118.34) 146.0 (69.7, 222.23)∗
3DGRE versus MATH 5.5 (1.60, 9.42)∗ 105.4 (61.38, 149.49)∗ 41.2 (−25.40, 107.71) 146.0 (69.7, 222.23)∗
3DGRE versus FFDM 11.5 (7.57, 15.38)∗ 138.0 (94.05, 181.99)∗ 7.9 (−58.49, 74.38) 146.0 (69.7, 222.23)∗
STIR versus HSM 1.6 (−2.22, 5.34) 71.8 (27.94, 115.56)∗ 76.2 (9.97, 142.36)∗ 147.9 (71.93, 223.90)∗
STIR versus MATH 2.6 (−1.29, 6.50) 83.1 (39.23, 127.01)∗ 65.4 (−0.89, 131.74)∗ 147.9 (71.93, 223.90)∗
STIR versus FFDM 8.6 (4.68, 12.46)∗ 115.7 (71.89, 159.51)∗ 33.2 (−98.40, 33.98) 147.9 (71.93, 223.90)∗
FFDM versus HSM 7.0 (3.12, 10.90)∗ 44.0 (0.14, 87.76)∗ 44 (−22.24, 110.15) 0 (−75.98, 75.98)
FFDM versus MATH 6.0 (2.07, 9.86)∗ 32.6 (−11.31, 76.47) 33.2 (−33.20, 99.53) 0 (−75.98, 75.98)
^a^HSM, histogram segmentation method; FFDM, full field digital mammography unit; MATH, mathematical algorithm; 3DGRE, 3D gradient echo; STIR, short tau inversion recovery.
^b^Mean (95% confidence interval).
∗Difference between means, significance at *P* ≤ 0.05 with false discovery rate.
######
Pearson\'s correlation coefficients between dependent and selected independent variables of study population (*n* = 137).
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Variables Pearson\'s correlation coefficients
------------------------------ -------------------------------------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- -------- -------- -------- ---------- -------- -------- ---------- ---------- ---------- ---------- ---------- ----------
Age −0.06\ −0.10\ −0.09\ −0.11\ −0.07\ −0.13\ −0.12\ −0.06\ −0.08\ −0.14\ −0.13\ −0.13\ −0.14\ −0.16\ −0.13\ −0.08\ −0.09\ −0.04\ −0.09\ −0.03\ −0.06\
0.49 0.23 0.30 0.19 0.44 0.14 0.16 0.47 0.35 0.10 0.14 0.14 0.10 0.07 0.14 0.33 0.30 0.61 0.30 0.73 0.52
Age at menarche 0.12\ 0.09\ 0.11\ 0.08\ 0.11\ −0.10\ −0.07\ −0.07\ −0.07\ 0.07\ 0.05\ 0.00\ −0.01\ 0.01\ 0.05\ −0.13\ −0.10\ −0.06\ −0.08\ −0.08\ −0.09\
0.16 0.30 0.21 0.37 0.21 0.26 0.43 0.39 0.39 0.39 0.53 0.96 0.91 0.88 0.55 0.13 0.24 0.47 0.36 0.36 0.29
Weight −0.59\ −0.57\ −0.55\ −0.62\ −0.61\ 0.64\ 0.65\ 0.75\ 0.74\ −0.08\ 0.04\ −0.20\ 0.16\ −0.02\ 0.09\ 0.72\ 0.75\ 0.74\ 0.73\ 0.78\ 0.76\
\<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.35 0.63 0.02 0.07 0.80 0.31 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
Height 0.06\ 0.10\ 0.04\ 0.11\ 0.13\ −0.16\ −0.14\ −0.13\ −0.12\ −0.09\ −0.10\ −0.03\ −0.12\ −0.04\ −0.04\ −0.14\ −0.12\ −0.12\ −0.12\ −0.13\ −0.12\
0.52 0.23 0.61 0.21 0.13 0.06 0.11 0.13 0.15 0.30 0.26 0.74 0.17 0.67 0.64 0.09 0.16 0.18 0.15 0.14 0.16
BMI −0.61\ −0.61\ −0.57\ −0.66\ −0.66\ 0.70\ 0.70\ 0.80\ 0.78\ −0.04\ 0.08\ −0.18\ 0.20\ 0.00\ 0.11\ 0.77\ 0.79\ 0.78\ 0.78\ 0.83\ 0.80\
\<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.61 0.35 0.03 0.02 0.96 0.22 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
Fat body mass −0.61\ −0.62\ −0.60\ −0.66\ −0.67\ 0.69\ 0.72\ 0.82\ 0.81\ −0.04\ 0.11\ −0.21\ 0.23\ 0.03\ 0.12\ 0.76\ 0.80\ 0.80\ 0.79\ 0.85\ 0.83\
\<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.64 0.21 0.02 0.01 0.75 0.18 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
Lean body mass −0.44\ −0.36\ −0.35\ −0.41\ −0.38\ 0.42\ 0.44\ 0.48\ 0.47\ −0.09\ 0.00\ −0.10\ 0.13\ −0.02\ 0.08\ 0.48\ 0.51\ 0.47\ 0.48\ 0.50\ 0.48\
\<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.31 0.96 0.23 0.13 0.84 0.34 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
Waist circumference −0.60\ −0.60\ −0.59\ −0.67\ −0.66\ 0.69\ 0.69\ 0.78\ 0.76\ −0.04\ 0.08\ −0.19\ 0.17\ −0.02\ 0.09\ 0.76\ 0.77\ 0.76\ 0.77\ 0.82\ 0.79\
\<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.62 0.38 0.03 0.05 0.78 0.29 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
Hip circumference −0.55\ −0.51\ −0.50\ −0.58\ −0.56\ 0.59\ 0.62\ 0.71\ 0.70\ −0.07\ 0.06\ −0.14\ 0.18\ 0.00\ 0.10\ 0.66\ 0.70\ 0.67\ 0.68\ 0.74\ 0.72\
\<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.40 0.47 0.10 0.04 0.99 0.23 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
Triglycerides −0.13\ −0.15\ −0.18\ −0.24\ −0.19\ 0.16\ 0.21\ 0.20\ 0.19\ 0.07\ 0.13\ 0.04\ 0.11\ 0.04\ 0.10\ 0.15\ 0.19\ 0.18\ 0.21\ 0.20\ 0.17\
0.13 0.08 0.04 0.01 0.02 0.05 0.02 0.02 0.03 0.44 0.14 0.62 0.21 0.66 0.23 0.07 0.02 0.04 0.01 0.02 0.04
Cholesterol −0.06\ −0.06\ −0.06\ −0.08\ −0.11\ 0.12\ 0.14\ 0.11\ 0.10\ 0.07\ 0.10\ 0.08\ 0.13\ 0.08\ 0.05\ 0.11\ 0.12\ 0.09\ 0.12\ 0.10\ 0.10\
0.49 0.46 0.47 0.36 0.19 0.15 0.11 0.21 0.24 0.43 0.24 0.35 0.14 0.37 0.54 0.21 0.16 0.31 0.16 0.27 0.27
HDL 0.36\ 0.41\ 0.40\ 0.43\ 0.39\ −0.28\ −0.29\ −0.33\ −0.32\ 0.11\ 0.02\ 0.20\ 0.01\ 0.11\ 0.02\ −0.34\ −0.35\ −0.39\ −0.36\ −0.37\ −0.35\
\<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.001 0.001 \<0.0001 0.0001 0.20 0.81 0.02 0.95 0.19 0.81 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
Alkaline phosphatase −0.22\ −0.29\ −0.26\ −0.34\ −0.35\ 0.37\ 0.42\ 0.46\ 0.44\ 0.10\ 0.18\ −0.05\ 0.18\ 0.09\ 0.11\ 0.36\ 0.41\ 0.43\ 0.43\ 0.45\ 0.44\
0.01 0.001 0.002 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.25 0.03 0.57 0.03 0.29 0.21 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
Alanine aminotransferase −0.09\ −0.11\ −0.08\ −0.11\ −0.11\ 0.15\ 0.13\ 0.13\ 0.13\ 0.05\ 0.06\ −0.02\ 0.06\ 0.05\ 0.06\ 0.14\ 0.13\ 0.13\ 0.13\ 0.13\ 0.12\
0.28 0.19 0.39 0.19 0.19 0.08 0.14 0.13 0.14 0.57 0.51 0.86 0.51 0.58 0.51 0.09 0.14 0.13 0.12 0.15 0.16
Aspartate aminotransferase 0.12\ 0.09\ 0.14\ 0.08\ 0.10\ −0.04\ 0.01\ −0.04\ −0.04\ 0.12\ 0.15\ 0.14\ 0.14\ 0.14\ 0.12\ −0.08\ −0.05\ −0.07\ −0.05\ −0.07\ −0.07\
0.17 0.31 0.11 0.33 0.25 0.68 0.92 0.68 0.66 0.18 0.09 0.11 0.11 0.11 0.15 0.35 0.57 0.42 0.57 0.43 0.42
Sex hormone binding globulin 0.28\ 0.33\ 0.32\ 0.35\ 0.36\ −0.41\ −0.40\ −0.43\ −0.43\ −0.10\ −0.15\ 0.02\ −0.16\ −0.07\ −0.17\ −0.41\ −0.40\ −0.39\ −0.42\ −0.43\ −0.42\
0.0010 \<0.0001 0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.27 0.07 0.83 0.06 0.42 0.05 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
C-reactive protein −0.22\ −0.31\ −0.23\ −0.36\ −0.35\ 0.50\ 0.56\ 0.60\ 0.59\ 0.20\ 0.30\ 0.03\ 0.35\ 0.21\ 0.26\ 0.46\ 0.53\ 0.52\ 0.53\ 0.57\ 0.57\
0.01 0.0002 0.01 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.02 0.0003 0.70 \<0.0001 0.02 0.0019 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
Insulin −0.20\ −0.26\ −0.18\ −0.28\ −0.28\ 0.39\ 0.43\ 0.46\ 0.45\ 0.15\ 0.23\ 0.04\ 0.26\ 0.15\ 0.22\ 0.37\ 0.41\ 0.39\ 0.40\ 0.44\ 0.43\
0.02 0.003 0.04 0.001 0.001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 0.08 0.01 0.68 0.00 0.09 0.01 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001 \<0.0001
IGF-I 0.15\ 0.21\ 0.21\ 0.22\ 0.28\ −0.22\ −0.18\ −0.24\ −0.24\ −0.05\ −0.06\ 0.06\ −0.05\ 0.00\ 0.02\ −0.22\ −0.19\ −0.21\ −0.21\ −0.24\ −0.26\
0.09 0.02 0.01 0.01 0.001 0.01 0.03 0.01 0.004 0.57 0.51 0.52 0.56 0.97 0.84 0.01 0.02 0.01 0.01 0.004 0.002
IGF-II −0.12\ −0.11\ −0.16\ −0.20\ −0.18\ 0.24\ 0.25\ 0.26\ 0.25\ 0.07\ 0.10\ 0.02\ 0.09\ 0.03\ 0.02\ 0.23\ 0.25\ 0.23\ 0.27\ 0.26\ 0.26\
0.17 0.19 0.07 0.02 0.03 0.004 0.004 0.003 0.004 0.40 0.26 0.84 0.31 0.72 0.83 0.01 0.003 0.01 0.00 0.002 0.002
17*β*-Estradiol 0.06\ 0.19\ 0.12\ 0.13\ 0.12\ −0.08\ −0.04\ −0.09\ −0.10\ 0.05\ 0.08\ 0.23\ 0.12\ 0.16\ 0.14\ −0.11\ −0.08\ −0.17\ −0.09\ −0.13\ −0.14\
0.51 0.03 0.16 0.14 0.16 0.33 0.67 0.28 0.23 0.55 0.32 0.01 0.17 0.07 0.11 0.20 0.37 0.05 0.28 0.13 0.10
Progesterone 0.12\ 0.14\ 0.15\ 0.11\ 0.16\ −0.12\ −0.06\ −0.09\ −0.09\ 0.03\ 0.06\ 0.13\ 0.07\ 0.08\ 0.10\ −0.14\ −0.10\ −0.14\ −0.11\ −0.11\ −0.12\
0.15 0.10 0.09 0.20 0.07 0.15 0.45 0.28 0.31 0.75 0.46 0.13 0.41 0.38 0.25 0.09 0.24 0.11 0.19 0.18 0.17
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
######
Multivariate analysis model estimates for percent breast density (%-G), fibroglandular tissue volume (GV), fat tissue volume (FV), and total breast volume (TV) measured by five different methods (*n* = 137).
Explanatory variable Standardized *β*-estimates (SE)
------------------------------------------ --------------------------------- ----------------- ---------------- ----------------
BMI −0.59 (0.09)∗∗∗ −0.10 (0.11) 0.65 (0.07)∗∗∗ 0.52 (0.08)∗∗∗
Age −0.09 (0.07) −0.13 (0.08) −0.09 (0.05) −0.12 (0.06)∗
Age at menarche 0.13 (0.07) 0.13 (0.08) −0.05 (0.05) 0.01 (0.06)
Pregnancy, completed
Zero Reference
One 0.22 (0.26) −0.29 (0.32) −0.28 (0.21) −0.34 (0.23)
Two −0.52 (0.21)∗ −0.81 (0.27)∗∗ −0.08 (0.17) −0.34 (0.19)
Three and more −0.52 (0.22)∗ −0.99 (0.28)∗∗∗ −0.18 (0.18) −0.49 (0.20)∗∗
Alkaline phosphatase (ALP) −0.06 (0.08) 0.06 (0.10) 0.12 (0.06)∗ 0.12 (0.07)
Alanine aminotransferase (ALT) −0.22 (0.10)∗ −0.16 (0.12) 0.07 (0.08) 0.01 (0.09)
Aspartate aminotransferase (AST) 0.32 (0.10)∗∗∗ 0.24 (0.12)∗ −0.15 (0.08) −0.04 (0.09)
Insulin-like growth factor I (IGF-I) −0.03 (0.07) −0.10 (0.09) −0.04 (0.06) −0.07 (0.06)
Insulin-like growth factor II (IGF-II) −0.10 (0.07) 0.03 (0.09) 0.16 (0.06)∗∗ 0.15 (0.06)∗
Sex hormone binding globulin (SHBG) 0.07 (0.07) −0.09 (0.09) −0.05 (0.06) −0.07 (0.06)
C-reactive protein (CRP) 0.13 (0.09) 0.23 (0.11)∗ 0.05 (0.07) 0.13 (0.08)
17*β*-Estradiol −0.09 (0.07) 0.01 (0.09) 0.02 (0.06) 0.02 (0.06)
Progesterone 0.13 (0.07)∗ 0.20 (0.08)∗ −0.02 (0.05) 0.05 (0.06)
Measurement methods^\#^
Histogram segmentation method (HSM) Reference
Full field digital mammography (FFDM) −0.02 (0.27) −0.12 (0.33) −0.10 (0.21) 0 (0.23)
Mathematical algorithm (MATH) 0.10 (0.27) 0.01 (0.33) −0.01 (0.21) 0 (0.23)
3D gradient-echo MRI (3DGRE) −0.04 (0.27) −0.13 (0.34) −0.01 (0.21) −0.10 (0.24)
Short tau inversion recovery MRI (STIR) −0.10 (0.27) −0.14 (0.33) −0.003 (0.21) −0.09 (0.23)
Race and ethnicity
Non-Hispanic White Reference
Hispanic 0.30 (0.16) 0.32 (0.20) −0.11 (0.13) 0.02 (0.14)
African-American 0.40 (0.20)∗ 0.34 (0.25) −0.07 (0.16) 0.06 (0.18)
Model *R* ^2^ 0.54 0.29 0.71 0.65
\*\*\**P* \< 0.001, \*\**P* \< 0.01, and \**P* \< 0.05 for predictor strength within a regression model.
^\#^All *P* values \>0.05 for interaction terms between predictor variables and measurement methods (results not shown).
######
Multivariate analysis model estimates for percent breast density (%-G), fibroglandular tissue volume (GV), fat tissue volume (FV), and total breast volume (TV) measured by five different methods (*n* = 320).
Explanatory variable Standardized *β*-estimates (SE)
-------------------------- --------------------------------- ----------------- ----------------- -----------------
BMI −0.62 (0.02)∗∗∗ 0.03 (0.03) 0.79 (0.02)∗∗∗ 0.71 (0.02)∗∗∗
Age −0.03 (0.2) −0.01 (0.02) −0.02 (0.02) −0.02 (0.02)
Age at menarche 0.03 (0.2) 0.03 (0.02) 0.01 (0.02) 0.02 (0.02)
Pregnancy, completed
Zero Reference
One −0.08 (0.08) −0.002 (0.1) 0.18 (0.07)∗ 0.16 (0.07)∗
Two −0.24 (0.07)∗∗ −0.37 (0.08)∗∗∗ 0.005 (0.06) −0.14 (0.06)∗
Three and more −0.41 (0.07)∗∗∗ −0.69 (0.08)∗∗∗ −0.12 (0.06)∗ −0.35 (0.06)∗∗∗
Measurement method^a,b^
HSM Reference
GE −0.003 (0.06) −0.01 (0.07) 0.002 (0.05) 0 (0.05)
MATH 0.01 (0.06) 0.01 (0.07) 0.001 (0.05) 0 (0.05)
3DGRE 0.002 (0.06) −0.003 (0.07) 0.003 (0.05) 0 (0.06)
STIR 0.03 (0.06) 0.01 (0.08) −0.02 (0.05) −0.02 (0.06)
Race and ethnicity
Non-Hispanic White race Reference
Hispanic race 0.16 (0.05)∗∗ 0.33 (0.06)∗∗∗ −0.12 (0.04)∗∗ 0.01 (0.04)
African-American race 0.47 (0.06)∗∗∗ 0.39 (0.08)∗∗∗ −0.22 (0.05)∗∗∗ −0.05 (0.06)
Model *R* ^2^ **0.38** **0.09** **0.58** **0.51**
^a^HSM, histogram segmentation method; FFDM, full field digital mammography unit; MATH, mathematical algorithm; 3DGRE, 3D gradient echo; STIR, short tau inversion recovery.
^b^Interaction terms between predictors and measurement methods were all not significant (results not shown).
\*\*\**P* \< 0.001, \*\**P* \< 0.01, and \**P* \< 0.05.
[^1]: Academic Editor: Ian S. Fentiman
Only days prior to the team’s opening matchup against North Carolina, Syracuse announced that running backs Abdul Adams and Jarveon Howard have opted out of the upcoming season.
Adams and Howard were projected to be the team’s primary ball carriers in 2020, but neither participated in preseason training camp nor appeared on the team’s depth chart.
Adams toted the ball 87 times for 336 yards rushing and three scores as a junior and added 141 yards receiving. Howard, another piece of the 2019 backfield committee, rushed for 337 yards and three touchdowns as a sophomore.
Stepping in for the Orange will be redshirt freshman Jawhar Jordan and redshirt junior Markenzy Pierre. Jordan appeared in four games in 2019 and posted 15 carries for 105 yards and 2 receptions for 87 yards, including an 81-yard reception against the University of Louisville. Pierre appeared in 12 games in 2019 for the Orange, primarily on special teams, and contributed 102 all-purpose yards.
In the team’s press conference Tuesday night, quarterback Tommy DeVito spoke highly of Jordan and Pierre. He said that they brought an element of surprise to the offense and complimented their work-ethic.
“Those two guys, I know they’ve been working their butt off the whole summer and whole fall camp,” DeVito said. “They’ve earned the trust and respect of everybody on the offense, so I have all of my trust in them.”
Head coach Dino Babers said that opt-outs have been kept internal to avoid handing opponents a competitive advantage, and he has released no further information at this time. Other players who decided to sit out include projected starting linebacker Tyrell Richards and defensive lineman Cooper Dawson.
DeVito expresses confidence in team
Looking to build on a solid first season at the helm of SU’s offense, quarterback Tommy DeVito said he plans to expand his squad’s capabilities by taking more chances in the passing game.
“Obviously we want to take care of the ball, but part of this offense is driving a fast car and pushing it to the limit,” DeVito said Tuesday. “We’re just getting the ball into our playmakers’ hands and letting them work.”
DeVito, who threw for 2,360 yards, 19 touchdowns and five interceptions last year, said that he added weight to his frame this past offseason to withstand the punishment of a full ACC college football season and worked to better his understanding of defensive schemes.
As for SU’s own defense, the linebacking corps lost an experienced contributor with senior Tyrell Richards’ decision to sit out this upcoming season.
Defensive back Andre Cisco said Tuesday that the unit would gain more cohesion with time and their coachability and athleticism would overcome their inexperience.
Cisco’s 12 interceptions entering the 2020 season are the most of any active player in the FBS, and he is second in the country with 1.27 pass break-ups per game, according to cuse.com. Cisco operates as a hybrid linebacker/safety for the Orange, commonly known as a rover.
Offensive lineman Airon Servais enters his senior season against North Carolina on Saturday and said that his role as a vocal leader for the team has expanded this season. He said that the team is comfortable as a unit and ready for any surprises thrown their way by the uncertainty of the season.
SU alumni land roster spots across NFL
With the conclusion of NFL training camps and the impending start of the 2020 season, active rosters across the league were trimmed to 53 players. Numerous former Syracuse standouts found themselves on active rosters or expanded practice squad rosters for the season.
Longtime NFL starters Chandler Jones and Justin Pugh of the Arizona Cardinals are expected to start once again for the team in 2020, according to ESPN. Jones, an edge rusher and three-time Pro Bowl selection, was selected in the first round of the 2012 NFL Draft. Pugh, an offensive lineman, was taken in the first round in 2013.
Punters Riley Dixon and Sterling Hofrichter made the active rosters for the New York Giants and the Atlanta Falcons respectively. Dixon was drafted by the Giants in the seventh round of the 2016 NFL Draft. Hofrichter, a 2019 graduate, was selected by the Falcons in the seventh round in 2020.
Defensive tackle Alton Robinson joined the Seattle Seahawks as a fifth round pick in 2020 and was named to the active roster. Linebacker Zaire Franklin was retained by the Indianapolis Colts, who drafted him in the seventh round in 2018. Wide receiver Trishton Jackson joined the Los Angeles Rams as an undrafted free agent this offseason.
Joining practice squads around the league were linebacker Evan Foster (Cardinals), offensive lineman Koda Martin (Cardinals) and defensive tackle Chris Slayton (Giants).
I’m so honored to be elected team captain for the third consecutive season! Can’t wait to get rollin on Sunday! Sackman out!
— ♛Chandler Jones (@chanjones55) September 9, 2020
ACC unveils Fall Olympic Sports schedules
Syracuse Olympic sports athletes from cross country, field hockey, volleyball, and men’s and women’s soccer will be allowed to participate in conference games following a schedule released by the ACC.
The cross country season will run from Sept. 11 to Oct. 24, with seven weeks of meets. The ACC Championships will be Oct. 30, at WakeMed Soccer Park, located in Cary, N.C. All 15 women’s and all 15 men’s teams will receive an invitation.
An official schedule has yet to be released, but the men’s team looks to defend its ACC title as the No. 2 ranked team, and the women’s team is ranked No. 4 in the ACC Preseason Coaches’ Poll.
The field hockey team will play six games through Nov. 1, hosting Duke on Sept. 18 at J.S. Coyne Stadium. The ACC Championship will include all seven teams starting Nov. 5 at Duke’s Williams Field at Jack Katz Stadium.
The team returns all but one starter and was ranked No. 6 in the field hockey Preseason ACC Coaches’ Poll.
Women’s soccer kicks off the season against Pittsburgh on Sept. 17, and will play nine games. The top eight teams will be invited to the ACC Tournament beginning Nov. 10, with quarterfinal matchups.
Men’s soccer is scheduled to play from Sept. 10 to Nov. 8, and the Orange face Virginia in Charlottesville, Va., on Sept. 18. The conference was divided into the North and South Regions with Syracuse in the former. The top four teams from each region will compete in the ACC Tournament starting Nov. 15.
Beginning with a home game against Pittsburgh on Sept. 25, the volleyball team will play eight games until the end of the regular season on Oct. 25. The 15 teams were split into three regions, and each team will play its four region opponents twice. No postseason schedule has been announced. | https://www.thenewshouse.com/sports/syracuse-football-coronavirus-opt-outs/ |
---
abstract: 'Designing a space mission is a computation-heavy task. Software tools that conduct the necessary numerical simulations and optimizations are therefore indispensable. The usability of existing software, written in Fortran and MATLAB, suffers because of high complexity, low levels of abstraction and out-dated programming practices. We propose Python as a viable alternative for astrodynamics tools and demonstrate the proof-of-concept library Plyades which combines powerful features with Pythonic ease of use.'
author:
- 'Helge Eichhorn$^{\setcounter{footnotecounter}{1}\fnsymbol{footnotecounter}\setcounter{footnotecounter}{2}\fnsymbol{footnotecounter}}$ [^1][^2], Reiner Anderl$^{\setcounter{footnotecounter}{2}\fnsymbol{footnotecounter}}$[^3]'
title: 'Plyades: A Python Library for Space Mission Design'
---
data modeling, object-oriented programming, orbital mechanics, astrodynamics
Introduction \[introduction\]
=============================
Designing a space mission trajectory is a computation-heavy task. Software tools that conduct the necessary numerical simulations and optimizations are therefore indispensable and high numerical performance is required. No science mission or spacecraft are exactly the same and during the early mission design phases the technical capabilities and constraints change frequently. Therefore high development speed and programmer productivity are required as well. Due to its supreme numerical performance Fortran has been the top programming language in many organizations within the astrodynamics community.
At the European Space Operations Center (ESOC) of the European Space Agency (ESA) a large body of sometimes decades-old Fortran77 code remains in use for mission analysis tasks. While this legacy code mostly fulfills the performance requirements, usability and programmer productivity suffer. One reason for this is that Fortran is a compiled language, which hinders exploratory analyses and rapid prototyping. A more serious problem, though, is the low level of abstraction supported by Fortran77 and its programming conventions, like the 6-character variable name limit and fixed-form source code, which conflict with today’s best practices. The more recent Fortran standards remedy a lot of these shortcomings, e.g. free-form source in Fortran90 or object-oriented programming features in Fortran2003, but also introduce new complexity, e.g. requiring sophisticated build systems for dependency resolution. Compiler vendors have also been very slow to implement new standards. For example, only this year did the Intel Fortran compiler achieve full support of the Fortran2003 standard, which was released in 2005 [@IFC15].
Due to these reasons Fortran-based tools and libraries have been generally used together with programming environments with better usability such as MATLAB. A common approach for developing mission design software at ESOC is prototyping and implementing downstream processes such as visualization in MATLAB and then later porting performance-intensive parts or the whole system to Fortran77. The results are added complexity through the use of the MEX-interface for integrating Fortran and MATLAB, duplicated effort for porting, and still a low-level of abstraction because the system design is constrained by Fortran’s limitations.
Because of the aforementioned problems some organizations explore possibilities to replace Fortran for future development. The French space agency CNES (Centre National D’Études Spatiales) for instance uses the Java-based Orekit library [@Ore15] for its flight dynamics systems.
In this paper we show why Python and the scientific Python ecosystem are a viable choice for the next generation of space mission design software and present the Plyades library. Plyades is a proof-of-concept implementation of an object-oriented astrodynamics library in pure Python. It makes use of many prominent scientific Python libraries such as Numpy, Scipy, Matplotlib, Bokeh, and Astropy. In the following we discuss the design of the Plyades data model and conclude the paper with an exemplary analysis.
Why Python? \[why-python\]
==========================
Perez, Granger, and Hunter [@PGH11] show that the scientific Python ecosystem has reached a high level of maturity and conclude that Python has now entered a phase where it’s clearly a valid choice for high-level scientific code development, and its use is rapidly growing. This assessment also holds true for astrodynamics work as most of the required low-level mathematical and infrastructural building blocks are already available, as shown below:
- Vector algebra: NumPy [@WCV11]
- Visualization: Matplotlib [@JDH07], Bokeh [@BDT15]
- Numerical integration and optimization: SciPy [@JOP01]
- High performance numerics: Cython [@BBC11], Numba [@NDT15]
- Planetary ephemerides: jplephem [@BRh15]
Another advantage is Python’s friendliness to beginners. In the United States Python was the most popular language for teaching introductory computer science (CS) courses at top-ranked CS-departments in 2014 [@PGu14]. Astrodynamicists are rarely computer scientists but mostly aerospace engineers, physicists and mathematicians. Most graduates of these disciplines have only little programming experience. Moving to Python could therefore lower the barrier of entry significantly.
It is also beneficial that the scientific Python ecosystem is open-source compared to the MATLAB environment and commercial Fortran compilers which require expensive licenses.
Design of the Plyades Object Model \[design-of-the-plyades-object-model\]
=========================================================================
The general idea behind the design of the Plyades library is the introduction of proper abstraction. We developed a domain model based on the following entities which are part of every analysis in mission design:
- orbits (or trajectories),
- spacecraft state vectors,
- and celestial bodies.
The Body Class \[the-body-class\]
---------------------------------
The Body class is a simple helper class that holds physical constants and other properties of celestial bodies such as planets and moons; a brief illustrative sketch follows the list below. These include
- the name of the body,
- the gravitational parameter $\mu$,
- the mean radius $r_m$,
- the equatorial radius $r_e$,
- the polar radius $r_p$,
- the $J_2$ coefficient of the body’s gravity potential.
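As a rough illustration, the sketch below shows one way such a container could look in Python. The attribute names follow the list above and the numerical values are standard rounded Earth constants, but the class layout and the use of Astropy units are assumptions made for illustration rather than the actual Plyades source.

    import astropy.units as u

    class Body:
        """Container for the physical constants of a celestial body."""
        def __init__(self, name, mu, mean_radius, equatorial_radius,
                     polar_radius, j2):
            self.name = name
            self.mu = mu                              # gravitational parameter
            self.mean_radius = mean_radius
            self.equatorial_radius = equatorial_radius
            self.polar_radius = polar_radius
            self.j2 = j2                              # J2 coefficient of the gravity potential

    # Rounded reference values for Earth, for illustration only.
    EARTH = Body(
        name='Earth',
        mu=398600.4418 * u.km**3 / u.s**2,
        mean_radius=6371.0 * u.km,
        equatorial_radius=6378.137 * u.km,
        polar_radius=6356.752 * u.km,
        j2=1.08262668e-3,
    )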
The State Class \[the-state-class\]
-----------------------------------
To define the state of a spacecraft in space-time we need the following information
- the position vector ($\vec{r} \in \mathbb{R}^3$),
- the velocity vector ($\vec{v} \in \mathbb{R}^3$),
- the corresponding moment in time, the so-called epoch,
- the reference frame in which the vectors are defined,
- the central body which also defines the origin of the coordinate system,
- and additional spacecraft status parameters such as mass.
While the information could certainly be stored in a single Numpy-array an object-oriented programming (OOP) approach offers advantages. Since all necessary data can be encapsulated in the object most orbital characteristics can be calculated by calling niladic or monadic instance methods. Keeping the number of parameters within the application programming interface (API) very small, as recommended by Robert C. Martin [@RCM08], improves usability, e.g. the user is not required to know the order of the function parameters. OOP also offers the opportunity to integrate the `State` class with the Python object model and the Jupyter notebook to provide rich human-friendly representations.
State vectors also provide methods for backward and forward propagation. Propagation generates trajectories, which are instances of the `Orbit` class.
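To illustrate this design, the sketch below shows how a few orbital characteristics can be exposed as read-only properties of such a state object. The constructor signature mirrors the example in the Exemplary Usage section, but the property implementations and the use of plain NumPy arrays (in km and km/s, with `body.mu` as a plain float in km^3/s^2) are simplifying assumptions for illustration, not the actual Plyades code.

    import numpy as np

    class State:
        """Spacecraft state: position, velocity, epoch, frame, and central body."""
        def __init__(self, r, v, t, frame, body):
            self.r = np.asarray(r, dtype=float)   # position [km]
            self.v = np.asarray(v, dtype=float)   # velocity [km/s]
            self.t = t                            # epoch (astropy.time.Time)
            self.frame = frame                    # e.g. 'ECI'
            self.body = body                      # central body; body.mu assumed in km^3/s^2

        @property
        def semi_major_axis(self):
            # Vis-viva: specific orbital energy eps = v^2/2 - mu/r = -mu/(2a)
            r = np.linalg.norm(self.r)
            v = np.linalg.norm(self.v)
            eps = v**2 / 2 - self.body.mu / r
            return -self.body.mu / (2 * eps)

        @property
        def period(self):
            # Kepler's third law: T = 2*pi*sqrt(a^3/mu), in seconds
            return 2 * np.pi * np.sqrt(self.semi_major_axis**3 / self.body.mu)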
The Orbit Class \[the-orbit-class\]
-----------------------------------
In contrast to the `State` class, which represents a single state in space-time, the `Orbit` class spans a time interval and contains several spacecraft states. It provides all necessary tools to analyze the evolution of the trajectory over time (a brief sketch follows the list below), including
- quick visualizations in three-dimensional space and two-dimensional projections,
- evolution of orbital characteristics,
- and determination of intermediate state vectors.
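A minimal sketch of this idea is given below: an orbit as a time-ordered container of state objects, from which element histories and intermediate states can be extracted. The method names and implementations are illustrative assumptions, not the Plyades API.

    import numpy as np

    class Orbit:
        """A time-ordered collection of State objects along one trajectory."""
        def __init__(self, states):
            self.states = list(states)

        def element_history(self, name):
            # Evolution of a named orbital characteristic over the stored epochs.
            epochs = [s.t for s in self.states]
            values = np.array([getattr(s, name) for s in self.states])
            return epochs, values

        def state_at(self, epoch):
            # Stored state closest to the requested epoch (astropy.time.Time).
            deltas = [abs((s.t - epoch).sec) for s in self.states]
            return self.states[int(np.argmin(deltas))]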
Exemplary Usage \[exemplary-usage\]
===================================
In this example we use the Plyades library to conduct an analysis of the orbit of the International Space Station (ISS). We obtain the initial state data for August 28, 2015, 12:00h from NASA realtime trajectory data [@NAS15] and use it to instantiate a Plyades `State` object as shown below.
    import numpy
    import astropy.time
    import astropy.units
    import plyades
    import plyades.bodies

    iss_r = numpy.array([
        -2775.03475,
        4524.24941,
        4207.43331,
    ]) * astropy.units.km
    iss_v = numpy.array([
        -3.641793088,
        -5.665088604,
        3.679500667,
    ]) * astropy.units.km / astropy.units.s
    iss_t = astropy.time.Time('2015-08-28T12:00:00.000')
    frame = 'ECI'
    body = plyades.bodies.EARTH
    iss = plyades.State(iss_r, iss_v, iss_t, frame, body)
The position (`iss_r`) and velocity (`iss_v`) vectors use the units functionality from the Astropy package [@ASP13] while the timestamp (`iss_t`) is an Astropy `Time` object. The constant `EARTH` from the `plyades.bodies` module is a `Body` object and provides Earth’s planetary constants.
The resulting `State` object contains all data necessary to describe the current orbit of the spacecraft. Calculations of orbital characteristics are therefore implemented with the `@property` decorator, like shown below, and are instantly available.
    @property
    def elements(self):
        return kepler.elements(self.body.mu, self.r, self.v)
We compute the following orbital elements for the orbit of the ISS:
- Semi-major axis: $a=6777.773$ km
- Eccentricity: $e=0.00109$
- Inclination: $i=51.724$ deg
- Longitude of ascending node: $\Omega=82.803$ deg
- Argument of periapsis: $\omega=101.293$ deg
- True anomaly: $\nu=48.984$ deg
Based on the orbital elements derived quantities like the orbital period can be determined.
In the idealized two-body problem which assumes a uniform gravity potential the only orbital element that changes over time is the true anomaly. It is the angle that defines the position of the spacecraft on the orbital ellipse. By solving Kepler’s equation we can determine the true anomaly for every point in time and derive new Cartesian state vectors [@DAV13].
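As a stand-alone illustration of this step, the sketch below solves Kepler's equation with Newton's method and converts the resulting eccentric anomaly to a true anomaly. It is a generic textbook formulation, not Plyades' internal solver; angles are in radians and the orbit is assumed to be elliptical.

    import numpy as np

    def solve_kepler(M, e, tol=1e-12, max_iter=50):
        """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E."""
        E = M if e < 0.8 else np.pi   # common starting guess
        for _ in range(max_iter):
            delta = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            E -= delta
            if abs(delta) < tol:
                break
        return E

    def true_anomaly(M, e):
        """Mean anomaly -> true anomaly for an elliptical orbit (radians)."""
        E = solve_kepler(M, e)
        return 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                                np.sqrt(1 - e) * np.cos(E / 2))

    # Example: true anomaly of the ISS orbit one quarter period after perigee.
    nu = true_anomaly(M=np.pi / 2, e=0.00109)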
    kepler_orbit = iss.kepler_orbit()
    kepler_orbit.plot3d()
We now call the `kepler_orbit` instance method to solve Kepler's equation at regular intervals until one revolution is completed. The trajectory comprising the resulting state vectors is stored in the returned `Orbit` object. By calling `plot3d` we receive a three-dimensional visualization of the full orbital ellipse, as shown in figure \[3d\].
We can achieve a similar result, apart from numerical errors, by numerically integrating Newton's equation:

$$\label{newton}
\vec{\ddot{r}} = -\mu \frac{\vec{r}}{|\vec{r}|^3}$$

Plyades uses the DOP853 integrator from the `scipy.integrate` suite, which is an 8th-order Runge-Kutta integrator with Dormand-Prince coefficients. By default the propagator uses adaptive step-size control and a simple force model that only considers the uniform gravity potential (see equation \[newton\]).
    newton_orbit = iss.propagate(
        iss.period * 0.8,
        max_step=500,
        interpolate=100
    )
    newton_orbit.plot_plane(plane='XZ', show_steps=True)
In this example we propagate for 0.8 revolutions and constrain the step size to 500 seconds to improve accuracy. We also interpolate additional state vectors between the integrator steps for visualization purposes.
The trajectory plot in figure \[numerical\] also includes markers for the intermediate integrator steps.
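The same propagation can be reproduced outside of Plyades by driving SciPy's `dop853` integrator directly. The sketch below does this for the uniform-gravity force model of equation \[newton\], using the ISS state vector from the example above; the fixed 500 s step cap and the rough 92.5-minute orbital period are illustrative assumptions, not values taken from the library.

    import numpy as np
    from scipy.integrate import ode

    MU_EARTH = 398600.4418   # km^3/s^2 (rounded reference value)

    def newton(t, y, mu):
        # Two-body equations of motion: d/dt [r, v] = [v, -mu * r / |r|^3]
        r = y[:3]
        acc = -mu * r / np.linalg.norm(r)**3
        return np.concatenate((y[3:], acc))

    # ISS state vector from the example above (km and km/s).
    y0 = np.array([-2775.03475, 4524.24941, 4207.43331,
                   -3.641793088, -5.665088604, 3.679500667])

    solver = ode(newton).set_integrator('dop853', max_step=500.0)
    solver.set_initial_value(y0, 0.0).set_f_params(MU_EARTH)

    t_end = 0.8 * 92.5 * 60.0   # ~0.8 revolutions, assuming a 92.5-minute period
    states = [y0]
    while solver.successful() and solver.t < t_end:
        solver.integrate(min(solver.t + 500.0, t_end))
        states.append(solver.y.copy())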
Since the shape of the Earth is an irregular ellipsoid rather than a sphere, Earth's gravity potential is also not uniform. We can model the oblateness of the Earth by including the second dynamic form factor $J_2$ in the equations of motion, as shown in equation \[j2\]:

$$\label{j2}
\vec{\ddot{r}} = -\mu \frac{\vec{r}}{|\vec{r}|^3} - \frac{3}{2} \frac{\mu J_2 R_e^2}{|\vec{r}|^5} \begin{bmatrix} x \left(1 - 5\frac{z^2}{|\vec{r}|^2}\right) \\ y \left(1 - 5\frac{z^2}{|\vec{r}|^2}\right) \\ z \left(3 - 5\frac{z^2}{|\vec{r}|^2}\right) \end{bmatrix}$$

When introducing this perturbation we should expect that the properties of the orbit will change over time. We will now analyze these effects further.
Plyades allows the substitution of force equations with a convenient decorator-based syntax that is illustrated in the next code listing.
    @iss.gravity
    def newton_j2(f, t, y, params):
        r = np.sqrt(np.square(y[:3]).sum())
        mu = params['body'].mu.value
        j2 = params['body'].j2
        r_m = params['body'].mean_radius.value
        rx, ry, rz = y[:3]
        f[:3] += y[3:]
        pj = -3/2 * mu * j2 * r_m**2 / r**5
        f[3] += -mu * rx / r**3 + pj * rx * (1 - 5 * rz**2 / r**2)
        f[4] += -mu * ry / r**3 + pj * ry * (1 - 5 * rz**2 / r**2)
        f[5] += -mu * rz / r**3 + pj * rz * (3 - 5 * rz**2 / r**2)
After propagating over 50 revolutions the perturbation of the orbit is clearly visible within the visualization in figure \[perturbed\]. A secular (non-periodical) precession of the orbital plane is visible. Thus a change in the longitude of the ascending node should be present.
We can plot the longitude of the ascending node by issuing the following command:
    j2_orbit.plot_element('ascending_node')
The resulting figure \[osculating\] shows the expected secular change of the longitude of the ascending node.
Future Development \[future-development\]
=========================================
As of this writing Plyades has been superseded by the Python Astrodynamics project [@PyA15]. The project aims to merge the three MIT-licensed, Python-based astrodynamics libraries Plyades, Poliastro [@JCR15] and Orbital [@FML15] and provide a comprehensive Python-based astrodynamics toolkit for productive use.
Conclusion \[conclusion\]
=========================
In this paper we have discussed the current tools and programming environments for space mission design. These suffer from high complexity, low levels of abstraction, low flexibility, and out-dated programming practices. We have then shown why the maturity and breadth of the scientific Python ecosystem as well as the usability of the Python programming language make Python a viable alternative for next generation astrodynamics tools. With the design and implementation of the proof-of-concept library Plyades we demonstrated that it is possible to create powerful yet simple to use astrodynamics tools in pure Python by using scientific Python libraries and following modern best practices. The Plyades work has lead to the foundation of the Python Astrodynamics project, an inter-european collaboration, whose goal is the development of a production-grade Python-based astrodynamics library.
[^1]: Corresponding author: <[email protected]>
[^2]: Technische Universität Darmstadt, Department of Computer Integrated Design
[^3]: Copyright©2015 Helge Eichhorn et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. http://creativecommons.org/licenses/by/3.0/
Planet Earth harbors as many secrets beneath the surface as above. Most point to subterranean civilizations that once lived on the surface of the planet and then - either through cataclysm or something as yet unexplained - inhabitants were forced to build underground dwellings to survive. These are in fact inserts in our simulation set in place as the quest for answers to age old questions continues ... who are we, why are we here, and where are we going. Many of you may have explored, or fantasized, being an archaeologist who discovers the answers sought throughout history. For today we move to Turkey and its ancient mysteries.
In my book Sarah and Alexander, Sarah, an American archaeologist and author, and her friend Amaan, a Turkish archaeologist and teacher, race against time to the place where the Tigris and Euphrates Rivers once met - their first step on a journey that brings them full circle. A UFO rises from the murky water projecting a beam of light onto a mountain nearby, which they then climb. Cutting through ancient shrubbery, aligned with the beam of light, they discover a stone door bearing a carved indentation matching the amulet given to Sarah by Alexander when first they met as children. As Sarah places the amulet into the indentation - a stone door pivots open to reveal clues about the origins of civilization and their place in it.
My second encounter with ancient Turkey was the 2009 Miniseries The Last Templar. FBI agent Sean Daley (Scott Foley) and archaeologist Tess Chaykin (Mira Sorvino) engage in a chase across three continents in search of the lost secret of the Knights Templar. Their adventure takes them to Turkey where they stumble upon an entrance to the lost ancient underground city of Derinkuyu. Today CNN Travel featured a story and video about Derinkuyu called Inside Turkey's incredible underground city. Check it out.
My next adventure linked to ancient Turkey introduced me to Gobekli Tepe a Neolithic archaeological site occasionally mentioned in the TV series Ancient Aliens - its massive megaliths another group of stone structures that one could believe were created by aliens who visited the planet in the past. At 12,000 years old, Gobekli Tepe predates humanity's oldest known civilizations. Its megalithic temples were cut from rock millennia before the 4,500-year-old pyramids in Egypt, 5,000-year-old Stonehenge in England, or 7,000-year-old Nabta Playa, the oldest known astronomical site. | https://www.crystalinks.com/WhereTheRiversMeet.html |
The NFL announced it is considering new overtime rules. The new rules will be considered by the competition committee and, if approved, would be implemented for future playoff games only. I've heard two versions of the proposal, and in this article I'll analyze both.
The version I heard goes like this: the team that loses the coin flip is always guaranteed at least one possession. If the coin-flip winner (which I'll refer to as the 'first team') scores and the second team matches the score, then the game reverts to the sudden death format. If the first team fails to score and the second team does, the second team wins. If the first team scores a field goal, and the second team scores a touchdown, the second team wins.
The other version, reported by ESPN, would guarantee the second team a possession only if the first team does not score a touchdown. In other words, if the first team scores a field goal, the second team gets one possession to match or exceed the score. If the first team scores a touchdown on its first possession, however, the game is over.
One very important consideration in the "response" format of football overtime is that the second team has the luxury of knowing what kind of score it needs to stay alive. In either version of the proposal, if the first team scores a field goal, the second team would never consider punting, and therefore would be far more likely to score than otherwise.
A Simple Model
Here is a simple model to illustrate just how big an effect this would be. A typical touchdown drive consists of 4 or 5 first downs, including the score itself. First downs are converted 67% of the time, moving the chains about 16 or so yards each conversion. This simple model makes for about a 70-yard drive, scoring a touchdown about 20% of the time (0.67 ^ 4 = 0.20). This makes sense because an offense typically scores a touchdown 20% of the time starting at its own 30.
If a team has all four downs available to it, how often could it score a touchdown? When a team uses a 4th down to convert, it essentially has two 3rd downs. Third downs of any distance in the NFL are converted about 43% of the time. So in the 33% of series which go to a 4th down, an additional 14% of series will result in conversions (0.33 * 0.43 = 0.14), for a total conversion rate of 81%.
This wouldn't just mean a 14% increase in the chance of scoring a touchdown. It would potentially more than double the chance. Football drives are recursive, meaning the same process is repeated over and over. If you increase the rate of success for each sub-process, the overall success rate increases geometrically. Touchdown drives would increase from 20% of all drives to well over 40% (0.81 ^ 4 = 0.43)!
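A few lines of Python make this arithmetic easy to check or to rerun with different assumptions; the 43% third-down rate and the four-first-down drive model are the rough estimates used above, not precise league statistics.

    # Rough drive model from above: a touchdown drive needs about 4 straight first downs.
    p_series_3_downs = 0.67   # series conversion rate when punting on 4th down
    p_third_down = 0.43       # rough conversion rate of a 3rd down of any distance

    # Using 4th down gives a failed series one more 3rd-down-like attempt.
    p_series_4_downs = p_series_3_downs + (1 - p_series_3_downs) * p_third_down

    p_td_3_downs = p_series_3_downs ** 4   # ~0.20
    p_td_4_downs = p_series_4_downs ** 4   # ~0.43

    print(f"series conversion, 4 downs: {p_series_4_downs:.2f}")   # ~0.81
    print(f"TD per drive, punting:      {p_td_3_downs:.2f}")
    print(f"TD per drive, 4 downs:      {p_td_4_downs:.2f}")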
Even if a team only needs a field goal and not a touchdown, it would still benefit from using its fourth downs, if necessary, prior to entering field goal range. And if they convert, they are still free to continue the drive seeking a touchdown.
Analysis
I'll first look at the version of the proposal in which the second team always gets an opportunity to respond whether the first team scores a field goal or touchdown. The illustration below is known as an extensive form of a game, sometimes referred to as an event tree. All possible permutations are considered moving left to right. Each state of the game is represented by a box (the 'nodes'), and the probability of moving from one node to another is noted on each arrow (the 'edges').
If the two teams tie at the end of the first two possessions, the game basically reverts to the sudden death format we're already familiar with. We've already seen this movie, and we know how it ends. Ignoring the possibility of a tie (which would be slightly higher now), the first team wins a little more than 60% of the time. So no matter how it plays out, with turnovers or punts or a kickoff return, we can collapse the game into a sub-game of 60%/40% in favor of the first team.
In this case, I made some conservative assumptions based on typical drive outcome rates. A team that needs a touchdown to match will get it 40% of the time. A team that needs a field goal to match would match it 20% of the time while getting the touchdown 30% of the time for the win.
We can sum up all the total probabilities for the scenarios in which the first team wins. The actual probabilities are just rough estimates for typical drives, so this analysis submits a method for finding an answer rather than declaring an actual answer with much certainty. Feel free to replace my probabilities with your own. In any case, this estimate results in a 52%/48% outcome in favor of the first team, which would be a significantly smaller advantage than the current format.
(You might notice that when the first team does not score, the game effectively becomes a sudden death game, except now the second team has the 60/40 advantage.)
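The event-tree calculation itself is short enough to express in code, which makes it easy to substitute your own probabilities. The response probabilities below are the assumptions described above; the first-drive outcome rates are illustrative placeholders, since the original diagram containing them is not reproduced here, and with these inputs the result lands near the 52%/48% split.

    # Win probability for the coin-flip winner under the "always respond" format.
    P_SD_FIRST = 0.60        # sudden-death sub-game, team receiving next possession
    P_MATCH_TD = 0.40        # second team answers a touchdown with a touchdown
    P_MATCH_FG = 0.20        # second team matches a field goal
    P_TD_AFTER_FG = 0.30     # second team answers a field goal with a touchdown

    p_first_td = 0.16        # assumed: first possession ends in a touchdown
    p_first_fg = 0.22        # assumed: first possession ends in a field goal
    p_first_none = 1 - p_first_td - p_first_fg

    win_after_td = (1 - P_MATCH_TD) + P_MATCH_TD * P_SD_FIRST
    win_after_fg = (1 - P_MATCH_FG - P_TD_AFTER_FG) + P_MATCH_FG * P_SD_FIRST
    win_after_none = 1 - P_SD_FIRST          # second team now owns the 60/40 edge

    p_first_wins = (p_first_td * win_after_td
                    + p_first_fg * win_after_fg
                    + p_first_none * win_after_none)
    print(f"first team wins: {p_first_wins:.2f}")   # ~0.52 with these inputs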
The other version does not reduce the advantage as much. If the second team cannot respond to a touchdown, it's not going to win as often. Here is the extensive form when the second team is guaranteed a response to a field goal by the first team.
In this version of the proposal, the advantage for the first team is 56%/44%, still smaller than the advantage in the current format.
Either way, the important point is that football is a recursive process, and its outcomes vary exponentially with respect to the sub-game outcomes. If a team's chance of converting a first down is increased by only a few percentage points, they would be able to score far more frequently. Second teams would be able to respond more easily than most people might think, including those on the NFL competition committee. I think the NFL will be surprised how often an overtime game under the new rules reverts to the sudden death format. | https://www.advancedfootballanalytics.com/2010/03/new-proposed-overtime-rules.html |
ZTE (Zhongxing Telecommunication Equipment Corporation) is a Chinese enterprise. It is the 4th largest mobile manufacturer in the world. In the year 1985, ZTE was founded as Zhongxing Semiconductor Co., Ltd in Shenzhen, Guangdong province. In March 1993, it changed its name to ‘Zhongxing New Telecommunications Equipment Co., Ltd’. At that time, it had the capital of RMB 3 million. With time, the firm evolved into the publicly traded ZTE Corporation. It made an initial public offering (IPO) on the Shenzhen stock exchange in 1997 and another on the Hong Kong stock exchange in December 2004.
In 2006, it established its place in the international telecom market, taking 40% of new global orders for CDMA networks. By 2008, ZTE was selling its products in 140 countries and had acquired a global customer base. By 2009, the company had become the third-largest vendor of GSM telecom equipment worldwide, with ZTE-branded GSM gear accounting for about 20% of sales across the world. In 2011, it held 7% of the key 3GPP Long Term Evolution (LTE) patents. During the same year, it launched the world's first smartphone with dual GPS/GLONASS navigation, the MTS 945.
ZTE operates through three business units: carrier networks (54%), terminals (29%), and telecommunication (17%). Its main products are wireless, exchange, access, optical transmission, and data telecommunications gear; mobile phones; and telecommunications software. The company also offers products that provide value-added services, such as video on demand and streaming media. ZTE primarily sells goods under its own name, but it also acts as an original equipment manufacturer (OEM).
Using bold colors, modern patterns and engaging compositions, students will use acrylic paint combined with colored pencil to create inspired portraits of animals. In this three-day intensive, students will work with basic drawing skills, contouring lines, loose gridding and blocking techniques to lay out the portraits on wooden panels. Final pieces will be completed in acrylic. Each student can expect to complete two 18” x 24” wooden panels in this workshop.
All materials are provided with registration. Students may work from a selection of provided photos or are welcome to bring their own photos for reference. | https://www.nevadaart.org/learn/e-l-cord-museum-school/class-schedule/class/modern-animals-in-acrylic/ |
At 02.56 GMT on July 21st, 1969, Neil Armstrong made the first human footprint on the moon, followed shortly by Buzz Aldrin. The ancient dream had been realized. 12,000 years after we crawled out of those caves, not even a single tick of the astronomical clock, we set foot upon another world. The Wright brothers’ first heavier-than-air flight, for a distance shorter than the wingspan of a 747, took place in December 1903. Within 66 years men had crossed a quarter of a million miles of space to walk upon the moon.
President Kennedy’s famous speech of September, 1962, had challenged America to achieve the goal “before this decade is out.” America, a nation of pioneers, could once again choose to make its own destiny instead of meekly falling in with whatever the future might hold. His country accepted that challenge, and in a decade of innovation, practised and perfected all of the techniques that would be needed to succeed.
I was in my 20s in that thrilling decade. We watched first the Mercury flights, then the Gemini ones, and finally the Apollo series. I stayed up, as did millions across the world, thrilling to those words, “Tranquility Base here. The Eagle has landed.” It was an event that united the world. There was a sense of species pride, that human beings had undertaken so difficult and dangerous a voyage of exploration, and had succeeded. We felt, indeed, that it had been “one giant leap for mankind.” If we could go to the moon, we could do anything.
The project’s costs drew some criticism, in that the money could have been spent on social housing, just as Queen Isabella could have spent her money on social housing instead of funding the explorations of Columbus, and Manchester City Football Club could be closed down, and its players sold off to fund social housing. Every achievement of humankind, be it artistic, scientific, engineering, exploration or adventure, could always have had its funds diverted instead to promoting social equality. That way we would achieve nothing, not even social equality.
There was a Cold War to be won, and the US moon landings played a part in undermining Communist morale and the belief that history was on their side. They played a part in letting us see our world as a whole, and realizing how tiny a part of the universe it occupies, and how fragile it seems.
To me one of the most telling lines from the Apollo programme was the observation made, looking at the blue and white globe of the Earth, that “Everything that ever happened took place down there.” That was where the dinosaurs were wiped out by a cosmic collision after a reign of 250m years. That was where primates first stood erect. That was where the pharaohs built pyramids and where Greeks fought Trojans. It was where Caesar was assassinated and Napoleon was defeated. It was where, more recently, Hitler, Stalin and Mao murdered their millions. It all happened on that tiny blue marble lost in the vastness of space.
It gave us a sense of being one world, and the hope must be that the anniversary of the first landings will rekindle the feeling that we share this planet. There are signs that the event is already rekindling the drive for adventure and discovery that will take us further into the exciting unknown. | |
Candy Gourlay, blogger extraordinaire and award winning author of ‘Tall Story’ and ‘Shine’ joined us for an afternoon packed with information and laughter. Over two sessions, Candy covered both what to do to write a great book, and what to do before and after publication to make sure you stand out amongst the crowd:
Session 1 –
A Writer is Just a Rabbit Staring at Rabbit Holes
Candy Gourlay once described writing novels as not so much falling down a rabbit hole as diving into it. Writers are like rabbits staring at rabbit holes that represent character, story and setting. We must dive in, and go as far as we can go, in order for our stories to reveal themselves. Candy will talk about how there are no half measures in unfolding a story and how we are all better authors for the journey we have to take.
Session 2 –
If Everyone’s Now Got a Platform, How are You Going to Stand Out?
We are all wise to the Internet now, all tweeting, blogging, Facebooking. But is anybody listening? Candy Gourlay was an early adopter of the Internet, blogging before Blogger was invented, learning web design before content management systems became ubiquitous, and trying out every new thing that came along from MySpace to Tumblr. She will be discussing the author’s biggest challenge: being discovered by readers. There will be tips and tricks and strategies. But be warned. Ultimately, it will be about writing a good book.
After the events, we took Candy for a well deserved dinner with a view! | https://hongkong.scbwi.org/2020/07/25/an-afternoon-of-craft-and-marketing-with-candy-gourlay-september-13-2014/ |
The emotional pain I endured was unbearable. I could hardly breathe; every thought I had was about my pain. The traumatic experience kept playing over and over in my head. I remember thinking why? Why me? Little did I realize that I was hurting myself even more by taking this kind of approach to life.
One day I saw a bumper sticker that read “Happiness will come to you when you are happy”. I did not comprehend what I had read and wondered how happiness could only come to me when I am happy? I went home and reflected on those words, I was determined to make sense of it in order to attain happiness. I then came to the conclusion that although I am not happy right now, I can surely attract happiness. What happened next was amazing.
I said the following out loud: “I attract happiness into my life,” and suddenly I began to feel a tad better. As I said the words out loud I felt the strength of these words travel through my body, and I made sure that every part of my body felt the power of happiness. I continued repeating the words; the more I said them, the more I believed them, and the belief came from adding the step of allowing my body to feel the power of the word happiness. Even though the pain was there, saying “I attract happiness into my life” out loud and feeling the power of the words in my body seemed to ease the pain. Furthermore, it gave me the courage to take action towards healing the pain.
If you want to experience a positive emotion, you must attract it. And to attract it, you must say the word associated with the emotion. An example of this is love: the more you say the word love, the more you feel love. Repeating the word generates a positive feeling in your body. Think about a person who constantly lies; eventually they begin to believe their own lies. The same applies to affirmations: the more you say one, the more your brain registers it as a fact.
“I am feeling good today”. The more you say it, the more you begin to believe it. Allowing your body to feel the power of the word “good” helps you to believe in what you are saying. Using the combination of words and feelings makes it easier to constantly feel good. Affirmations are positive statements that describe an end result you desire.
To enhance the affirmation process, use the following technique. Focus all of your attention on your heart and breathe in through your nose while saying in your mind, “I attract happiness into my life,” and then breathe out while saying in your mind, “I attract happiness into my life.” Do this at least 5 times; I found this technique to be most effective when using affirmations. Remember to use this breathing technique while using the other affirmations in the book.
“I ATTRACT happiness into my life.”
Excerpt from the book Thoughts of Perfection. Get the book and learn how To enhance the affirmation process. | https://ebrahimmongratie.com/2016/12/24/the-power-of-the-i-attract-affirmations/ |
Cook
Employer: Quality Inn & Conference Centre
Date Posted: July 19, 2021
Languages: English
- Location: Red Deer, AB
- Earnings: $17.00 to $19.00 hourly (to be negotiated)
- Work Hours: 30 to 44 hours per week
- Position: Permanent Full Time
- Vacancies: 2
- Closing Date: Aug 02, 2021
Employer
Quality Inn & Conference Centre
Languages
English
Education
Secondary (high) school graduation certificate
Cook Categories
Cook (general)
Cuisine Specialties
- International
- Canadian
- Gluten-free
Experience
1 year to less than 2 years
Additional Skills
- Maintain records of food costs, consumption, sales and inventory
- Analyze operating costs and other data
- Food safety/handling skills
- Prepare dishes for customers with food allergies or intolerances
- Requisition food and kitchen supplies
- Prepare and cook food on a regular basis, or for special guests or functions
- Prepare and cook meals or specialty foods
Work Setting
- Restaurant
- Hotel, motel, resort
Specific Skills
- Train staff in preparation, cooking and handling of food
- Inspect kitchens and food service areas
- Prepare and cook special meals for patients as instructed by dietitian or chef
- Clean kitchen and work areas
- Prepare and cook complete meals or individual dishes and foods
Security and Safety
- Bondable
- Criminal record check
Work Site Environment
- Odours
- Hot
- Cold/refrigerated
- Non-smoking
Transportation/Travel Information
- Own transportation
- Public transportation is available
Work Conditions and Physical Capabilities
- Fast-paced environment
- Work under pressure
- Repetitive tasks
- Handling heavy loads
- Physically demanding
- Attention to detail
- Standing for extended periods
- Overtime required
Ability to Supervise
- Staff in various areas of responsibility
- 5-10 people
Work Location Information
Urban area
Personal Suitability
- Initiative
- Effective interpersonal skills
- Flexibility
- Team player
- Excellent oral communication
- Client focus
- Dependability
- Judgement
- Reliability
- Organized
How to Apply
Anyone who can legally work in Canada can apply for this job. If you are not currently authorized to work in Canada, the employer will not consider your job application.
By e-mail:
Advertised Until
Aug 02, 2021
Important notice: This job posting has been provided by an external employer. The Government of Alberta and the Government of Canada are not responsible for the accuracy, authenticity or reliability of the content.
- METHODICAL: Interest in compiling information to monitor food and supplier inventory
- OBJECTIVE: Interest in precision working to prepare and cook complete meals and individual dishes and foods, and to prepare and cook special meals for patients as instructed by dietitians and chefs
- directive: Interest in supervising kitchen helpers; and in overseeing subordinate personnel in the preparation, cooking and handling of food
The interest code helps you figure out if you’d like to work in a particular occupation.
It’s based on the Canadian Work Preference Inventory (CWPI), which measures 5 occupational interests: Directive, Innovative, Methodical, Objective and Social.
Each set of 3 interest codes is listed in order of importance.
A code in capital letters means it’s a strong fit for the occupation.
A code in all lowercase letters means the fit is weaker. | https://alis.alberta.ca/occinfo/alberta-job-postings/cook/34775851/ |
Introduction {#Sec1}
============
Human decisions are guided by beliefs about current features of the environment. These beliefs often must be inferred from indirect and uncertain evidence. For example, deciding to go to a restaurant typically relies on a belief about its current quality, which can be inferred from past experiences at that restaurant. This inference process is particularly challenging in dynamic environments whose features can change unexpectedly (e.g., a new chef was just hired). In these environments, people tend to follow normative principles and update their beliefs dynamically and adaptively, such that beliefs are updated more strongly when existing beliefs are weak or irrelevant, and/or the new evidence is strong or surprising^[@CR1]--[@CR3]^. Recent studies have identified potential neural substrates of this adaptive belief-updating process, including univariate and multivariate activity patterns for uncertainty and surprise in several brain regions, including dorsomedial frontal cortex, anterior insula, lateral prefrontal cortex, and lateral parietal cortex^[@CR2],[@CR4]--[@CR7]^. The goal of the present study was to gain deeper insights into how these representations might interact dynamically to support adaptive belief updating.
We focused on how changes in belief updating relate to changes in functional connectivity between brain regions with task-relevant activity modulations. Functional connectivity reflects statistical dependencies between regional activity time series^[@CR8]^ and can form functional-connectivity networks that provide new perspectives on brain function^[@CR9]--[@CR11]^. Many recent studies of learning have focused on brain network reconfigurations occurring between naïve and well-learned phases in various domains such as motor, perceptual, category, spatial, or value learning^[@CR12]--[@CR22]^. In these cases, functional connectivity associated with the fronto-parietal system decreased gradually as learning progressed and this change in connectivity was associated with individual learning or performance^[@CR13],[@CR19],[@CR22]^. In dynamic environments, however, people progressively learn the current state and then re-initialize learning once the state changes. Thus, we expected frequent reconfigurations in functional connectivity, as learning shifts between slower and faster updating in response to changes in uncertainty and surprise. In addition, although brain regions that encode uncertainty and surprise participate in multiple networks, including the fronto-parietal system, dorsal attention system, salience system, and memory system^[@CR2],[@CR4]--[@CR7]^, based on previous network analyses of learning in stable environments we hypothesized that the fronto-parietal system would serve a particularly important role in network reconfiguration during learning in dynamic environments.
In the current study, we aimed to identify such frequent reconfigurations in functional connectivity during adaptive belief updating. A key to our approach was the use of an unsupervised machine-learning technique known as nonnegative matrix factorization (NMF)^[@CR23]^. NMF decomposes the whole-brain network into subgraphs, which describe patterns of functional connectivity across the entire brain, and the time-dependent magnitude with which these subgraphs are expressed. Briefly, a subgraph is a weighted pattern of functional interactions that statistically recurs as the brain network evolves over time. We chose NMF because it provides two key advantages over other approaches to matrix factorization, such as principal components analysis (PCA) or independent components analysis (ICA)^[@CR24],[@CR25]^. First, NMF yields a parts-based representation of the network, in which the individual components are strictly additive---a constraint that is not present in PCA and ICA. This important feature enables interpretation of the resulting subgraph and time-dependent expression coefficients on the basis of their positive distance from zero. Second, NMF does not enforce an orthogonality or independence constraint and, therefore, allows subgraphs to overlap in their structure. This property may more effectively model distinct subgraphs that may be jointly related via weak connections and better account for the flexibility of neural systems, such that one connection between regions can be involved in multiple systems or cognitive functions. Recently, NMF has been used to identify network dynamics during rest and task states^[@CR25],[@CR26]^ and to determine how these dynamics vary across development^[@CR24]^. Here, we extend the use of this technique to examine changes in network dynamics linked to task variables and individual differences.
Our results show that uncertainty and surprise, task variables that drive the adjustment of learning, are related to the temporal expression of specific patterns of functional connectivity (i.e., specific subgraphs). These specific patterns of functional connectivity prominently involve the fronto-parietal network. We also show that the dynamic modulation of these patterns of functional connectivity (i.e., subgraph expression) is associated with individual differences in learning.
Results {#Sec2}
=======
Belief updating is influenced by uncertainty and surprise {#Sec3}
---------------------------------------------------------
Participants performed a predictive-inference task during functional magnetic resonance imaging (fMRI) (Fig. [1a](#Fig1){ref-type="fig"}). For this task, participants positioned a bucket to catch a bag that dropped from an occluded helicopter. The location of the bag was sampled with noise from a distribution centered on the location of the helicopter. The location of the helicopter usually remained stable but occasionally changed suddenly and unpredictably (with an average probability of change of 0.1 across trials). In addition, whether the bag (if caught) was rewarded or neutral was assigned randomly on each trial and indicated by color. This task challenged participants to form and update a belief about a latent variable (the location of the helicopter) based on noisy evidence (the location of dropped bags).Fig. 1Overview of the task and theoretical model of belief updating (McGuire et al., 2014).**a** Sequence of the task. At the start of each trial, participants predict where a bag will drop from an occluded helicopter by positioning a bucket on the screen. After participants submit their prediction, the bag drops and any rewarded coins that fall in the bucket are added to the participant's score. The location of the last prediction and the last bag drop are noted on the next trial. **b** An example sequence of trials. Each data point represents the location of a bag on each trial (yellow for rewarded coins, gray for neutral coins). The dashed line represents the true generative mean. The mean changes occasionally. The cyan line represents the prediction from a normative model of belief updating. The inset equation shows how the model updates beliefs (*B*~*t*~ = belief, *X*~*t*~ = observed outcome, *α*~*t*~ = learning rate on trial *t*). The vertical dashed line represents the boundary of the noise conditions: high-noise (left) and low-noise condition (right). Noise refers to the variance of the generative distribution. **c** Two learning components from the normative model. Change-point probability (CPP) reflects the likelihood that a change-point happens, which is increased when there is an unexpectedly large prediction error. Relative uncertainty (RU) reflects the uncertainty about the generative mean relative to the environmental noise, which is increased after high CPP trials and decays slowly as more precise estimates of the generative mean are possible. The inset formula shows how CPP and RU contribute to single trial estimates of learning rates.
We previously described a theoretical model approximating the normative solution for this task^[@CR2]^. This theoretical model takes the form of a delta-rule and approximates the Bayesian ideal observer. Beliefs (*B*~*t*+1~) are updated based on the difference between the current outcome location (*X*~*t*~) and the predicted location (*B*~*t*~), with the extent of updating controlled by a learning rate (*α*~*t*~; Fig. [1b](#Fig1){ref-type="fig"}). Trial-by-trial learning rates are determined by two factors: (i) change-point probability (CPP), which is the probability that a change-point has happened and represents a form of belief surprise; and (ii) relative uncertainty (RU), which is the reducible uncertainty regarding the current state relative to the irreducible uncertainty that results from environmental noise and represents a form of belief uncertainty (Fig. [1c](#Fig1){ref-type="fig"}). Learning rates are higher when either CPP or RU is higher: *α*~*t*~ = CPP + (1 − CPP)RU.
We previously reported how participants' predictions were influenced by both normative and nonnormative factors and how these factors are encoded in univariate and multivariate activity^[@CR2],[@CR7]^. Participants updated their beliefs more when the value of CPP or RU was higher, consistent with the normative model. Participants also updated their beliefs more when the outcome was rewarded, however, which is not a feature of the normative model. CPP, RU, and reward, as well as residual updating (belief updating not captured by CPP, RU, or reward), were all encoded in univariate and multivariate brain activity in distinct regions^[@CR2],[@CR7]^. In the current study, we built on these previous findings and investigated how these factors, as well as individual differences in how these factors influence belief updating, are related to the dynamics of whole-brain functional connectivity.
NMF identified ten subgraphs that varied over time {#Sec4}
--------------------------------------------------
We used NMF to decompose whole-brain functional connectivity over time into specific patterns of functional connectivity, called subgraphs, and quantified the expression of these patterns over time. To perform NMF, we first defined regions of interest (ROIs) based on a previously defined parcellation^[@CR27]^ (Fig. [2a](#Fig2){ref-type="fig"}) and extracted blood-oxygenation-level-dependent (BOLD) time series for each ROI (Fig. [2b](#Fig2){ref-type="fig"}). For every pair of ROIs, we calculated the Pearson correlation coefficient between the BOLD time series in 10-TR (25 s) time windows, offset by 2 TRs for each time step (and thus 80% overlap between consecutive time windows). This procedure thus yielded a matrix whose entries represented time-dependent changes in the strengths of these pairwise correlations in the brain during the task. We unfolded each time window from this correlation matrix (Fig. [2c](#Fig2){ref-type="fig"}) into a one-column vector, and then concatenated these vectors from all time windows and all participants (Fig. [2d](#Fig2){ref-type="fig"}). As required for NMF, we transformed the resulting matrix to have strictly nonnegative values: we duplicated the full matrix, set all negative values to zero in the first copy, and set all positive values to zero in the second copy before multiplying all remaining values by negative one. Thus, we divided the final full data matrix into two-halves, with one-half containing the positive correlation coefficients (zero if the coefficient was negative) and one-half containing the absolute values of the negative correlation coefficients (zero if the coefficient was positive)^[@CR26]^. This procedure ensured that our approach did not give undue preference to either positive or negative functional connectivity, and that subgraphs were identified based on both positive and negative functional connectivity.Fig. 2Schematic overview of the method.**a** Regions of interest (ROIs). Functional MRI BOLD signals were extracted from spherical ROIs based on the previously defined parcellation^[@CR27]^. We only kept 247 ROIs that had usable data from all subjects. Each ROI can be assigned to one of 13 putative functional systems. The brain figure was visualized by the BrainNet Viewer^[@CR42]^ under the Creative Commons Attribution (CC BY) license (<https://creativecommons.org/licenses/by/4.0/>). **b** An example of Pearson correlation coefficients calculated between regional BOLD time series over the course of the experiment. Each BOLD time series was divided into 10-TR (25 s) time windows, and consecutive time windows were placed every 2 TRs leading to 80% overlap between consecutive time windows. Pairwise Pearson correlation coefficients were calculated between ROI time series in each time window. **c** An example of edge strength over time. In each time window, there were 247\*(247 − 1)/2 edges. **d** Nonnegative matrix factorization (NMF). In each time window, the matrix of edge strengths was unfolded into one column. Then, edges from all time windows in all participants were concatenated into a single matrix. Each row in the full data matrix contained an edge (pairwise correlation coefficients between BOLD time series from two ROIs) and each column contained a time window (across all scans and participants). 
Correlation values in this matrix were strictly non-negative; the full data matrix was divided into two halves, with one half containing the positive pairwise correlation coefficients (zero if the correlation coefficient was negative) and one half containing the absolute values of negative pairwise correlation coefficients (zero if the correlation coefficient was positive). Thus, subgraphs were identified based on both the similarity of positive functional connectivity and the similarity of negative functional connectivity together. Then, NMF was applied to decompose the concatenated matrix into a matrix **W**, which encoded the strengths of edges for each subgraph, and a matrix **H**, which encoded the time-dependent expression of each subgraph. For example, the strength of edges of the fourth subgraph (the fourth column in the matrix **W**) can be folded into a square matrix, reflecting the edge strength between every pair of ROIs.
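To make the construction of the data matrix concrete, the following Python sketch (not the authors' analysis code; variable names and the per-scan bookkeeping are illustrative) computes windowed pairwise correlations with the 10-TR window and 2-TR step used here, and then splits the signed values into the nonnegative positive and negative halves described above.

```python
import numpy as np

def sliding_window_fc(bold, win=10, step=2):
    """Windowed pairwise Pearson correlations.

    bold : (n_TRs, n_ROIs) array for one scan.
    Returns an (n_edges, n_windows) array; each column holds the upper-triangular
    entries of the ROI-by-ROI correlation matrix for one window.
    """
    n_tr, n_roi = bold.shape
    iu = np.triu_indices(n_roi, k=1)                 # unique ROI pairs (edges)
    cols = []
    for start in range(0, n_tr - win + 1, step):
        r = np.corrcoef(bold[start:start + win].T)   # ROI-by-ROI correlations
        cols.append(r[iu])
    return np.column_stack(cols)

def split_signed(edges_by_time):
    """Duplicate the signed FC matrix into nonnegative positive and negative parts."""
    pos = np.clip(edges_by_time, 0, None)            # positive correlations (else 0)
    neg = np.clip(-edges_by_time, 0, None)           # magnitudes of negative correlations
    return pos, neg

# Build the full data matrix A: windows from all scans and participants are
# concatenated along the column (time-window) dimension, with the positive half
# followed by the negative half, so that NMF sees only nonnegative values.
# fc_all = np.hstack([sliding_window_fc(b) for b in all_bold_runs])  # hypothetical list
# pos, neg = split_signed(fc_all)
# A = np.hstack([pos, neg])
```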
We applied NMF to this matrix (**A**) to identify functional subgraphs and their expression over time. Specifically, we decomposed the full data matrix into a subgraph matrix **W** and an expression matrix **H** (Fig. [2d](#Fig2){ref-type="fig"}). The columns of **W** represent different subgraphs and the rows represent different edges (i.e., pairs of regions), with the value in each cell representing the strength of that edge (i.e., the functional connectivity strength for that pair of regions) for that subgraph. The rows of **H** represent different subgraphs, and the columns represent time windows, with the value in each cell representing the degree of expression of that subgraph in that time window. We implemented NMF by minimizing the residual error $\|\mathbf{A} - \mathbf{W}\mathbf{H}\|_F^2$ via three parameters: (i) the number of subgraphs (*k*), (ii) the subgraph regularization (*α*), and (iii) the expression sparsity (*β*) (Supplementary Fig. [1](#MOESM1){ref-type="media"}).
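The factorization step can be sketched with scikit-learn's `NMF`, used here as a generic stand-in: the published pipeline optimized the subgraph regularization (*α*) and expression sparsity (*β*) in addition to the rank *k*, which this plain call does not reproduce. The `relative_expression` helper follows the definition of relative expression (positive minus negative expression) used in the Results.

```python
import numpy as np
from sklearn.decomposition import NMF

def decompose_subgraphs(A, k=10, seed=0):
    """Factor the nonnegative edge-by-window matrix A into W (edges x k) and H (k x 2*windows)."""
    model = NMF(n_components=k, init="nndsvd", max_iter=500, random_state=seed)
    W = model.fit_transform(A)    # columns of W: subgraph edge-weight patterns
    H = model.components_         # rows of H: time-dependent subgraph expression
    return W, H

def relative_expression(H, n_windows):
    """Positive-half expression minus negative-half expression for each subgraph."""
    return H[:, :n_windows] - H[:, n_windows:]
```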
Using NMF, we identified ten subgraphs, which reflected patterns of functional connectivity strengths across every pair of regions in the brain, as well as the expression of these subgraphs over time. The full description of each subgraph specifies the edge strength between every pair of ROIs, corresponding to a 247 × 247 matrix. We calculated a simpler summary description that specifies the edge strength between every pair of functional systems in the previously defined parcellation, corresponding to a 13 × 13 matrix^[@CR27]^. Edges between ROIs were categorized according to the functional system of each ROI. To estimate the diagonal entries in the system-by-system matrix, we averaged the weights of all edges connecting two ROIs within a given system (Fig. [3a](#Fig3){ref-type="fig"}). To estimate the off-diagonal entries of the system-by-system matrix, we averaged the weights of all edges linking an ROI in one system with an ROI in another system. In line with common parlance, we refer to the edges within the same system as within-system edges, whereas we refer to the edges between two different systems as between-system edges. For presentation, we ordered and numbered the ten subgraphs according to the strength of within-system edges relative to that of between-system edges (Fig. [3b](#Fig3){ref-type="fig"}, Supplementary Fig. [2a--c](#MOESM1){ref-type="media"}). Finally, we thresholded the system-by-system matrix to show only edges that passed a permutation test (*p* \< 0.05 after the Bonferroni correction for multiple comparisons; see Methods).The full data matrix on which we performed NMF was divided into two-halves, with the first half corresponding to positive functional connectivity and the second half corresponding to negative functional connectivity. The expression matrix **H** was therefore also divided into two-halves, with the first half corresponding to positive expression over time and the second half corresponding to negative expression over time. Positive and negative expression coefficients were highly negatively correlated with each other across time for all the subgraphs (all *r* \< −0.61, all *p* \< 0.001). For the analyses of subgraph expression below, we thus constructed a measure of relative subgraph expression by subtracting the negative expression from the positive expression at each time point^[@CR26]^. Across subgraphs, the average relative expression across time was strongly correlated with the relative strength of within- versus between-system edges (Supplementary Fig. [2d--f](#MOESM1){ref-type="media"}). That is, higher within-system strength was associated with greater relative expression of the subgraph.Fig. 3Patterns of connectivity in subgraphs.**a** Converting edges between nodes into edges between systems. First, the edges of each subgraph can be folded into a square matrix, representing the edges between every pair of nodes (ROIs). Then, based on the 13 putative functional systems reported by Power et al. (2011), we categorized each edge according to the system(s) to which the two nodes (ROIs) belonged. We calculated the mean strength of edges linking a node in one system to a node in another system, and refer to that value as the between-system edge. Similarly, we calculated the mean strength of edges linking two nodes that both belong to the same system and refer to that value as the within-system edge. Edges between nodes and edges between systems were normalized into the scale between 0 and 1. 
**b** Edges between systems in the ten subgraphs identified by NMF. We show only significant edges (*p* \< 0.05 after the Bonferroni correction for multiple comparisons). For each subgraph, the top matrix shows the significant edges in that subgraph within or between systems. For example, Subgraph 1 has high edge strengths along the diagonal; thus, this subgraph describes functional connectivity that lies predominantly within functional systems. In contrast, subgraph 5 has high edge strengths along a single row and column, corresponding to the visual system; thus, this subgraph describes functional connectivity between the visual system and all other systems. Subgraphs varied in the degree to which they represent interactions within the same system (e.g., subgraph 1) versus interactions between different systems (e.g., subgraph 10). All nodes from systems involved in significant edges are shown on the brain below by the BrainNet Viewer^[@CR42]^ under the Creative Commons Attribution (CC BY) license (<https://creativecommons.org/licenses/by/4.0/>).
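The node-to-system summary shown in Fig. 3a can be sketched as follows; `labels` is a hypothetical vector assigning each of the 247 ROIs to one of the 13 putative functional systems, and the permutation-based thresholding of the resulting matrix is omitted.

```python
import numpy as np

def system_by_system(subgraph_edges, labels):
    """Average node-level edge weights into a system-by-system matrix.

    subgraph_edges : (n_roi, n_roi) symmetric matrix of edge weights for one subgraph.
    labels         : length n_roi array of functional-system labels.
    """
    systems = np.unique(labels)
    S = np.zeros((len(systems), len(systems)))
    for i, si in enumerate(systems):
        for j, sj in enumerate(systems):
            rows = np.flatnonzero(labels == si)
            cols = np.flatnonzero(labels == sj)
            block = subgraph_edges[np.ix_(rows, cols)]
            if i == j:
                iu = np.triu_indices(len(rows), k=1)   # within-system: unique pairs only
                S[i, j] = block[iu].mean()
            else:
                S[i, j] = block.mean()                  # between-system: all cross-system pairs
    return S
```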
Normative factors modulated subgraph expression {#Sec5}
-----------------------------------------------
We investigated how CPP, RU, reward, and residual updating influenced the temporal expression of each subgraph. We identified a particularly strong relationship between the normative factors (CPP, RU, and the residuals that reflected the participants' subjective estimates of those variables) and subgraph 4, whose strongest edges were in the fronto-parietal task-control system, followed by the memory retrieval, salience and dorsal-attention systems (Fig. [4a, b](#Fig4){ref-type="fig"}). Specifically, we used multiple regression to estimate the trial-by-trial relationship between these four factors and the relative expression strength of each subgraph. For each subgraph, regression coefficients were fitted separately for each participant and were tested at the group level using *t* tests (Supplementary Fig. [3](#MOESM1){ref-type="media"}). Among the ten subgraphs, these four factors explained the most variance in the time-dependent relative expression of subgraph 4 (Supplementary Fig. [4](#MOESM1){ref-type="media"}), in each case showing positive modulations (CPP: mean ± SEM = 0.202 ± 0.053, *t*~31~ = 3.78, *p* \< 0.001; RU: 0.392 ± 0.077, *t*~31~ = 5.11, *p* \< 0.001; residual updating: 0.177 ± 0.079, *t*~31~ = 2.23, *p* = 0.033; Fig. [4c](#Fig4){ref-type="fig"}). We also evaluated the influence of head motion by including motion, as indexed as the relative root-mean-square of the six motion parameters, in the regression model. Motion was not significant (*p* = 0.29) and the effects of CPP, RU, and residual updating remained significant and of similar effect size.Fig. 4Temporal expression of subgraph 4 was related to task factors and individual differences.**a** Summary of the pattern of connectivity in subgraph 4. We summarized the pattern of connectivity as within-system strength (which is the value in the diagonal) and between-system strength (which is the average of values in the off-diagonal) for each system. The fronto-parietal system as well as three other systems (memory retrieval, salience, and dorsal attention) showed the strongest contributions to this subgraph in terms of both within-system and between-system strength. The 95% confidence interval of each system was estimated by boostrapping 10,000 times on the edges of that system. **b** Nodes for the top four systems with strong within-system and between-system strength. We showed the nodes of fronto-parietal system, memory retrieval system, salience system and dorsal attention system on the brain by the BrainNet Viewer^[@CR42]^ under the Creative Commons Attribution (CC BY) license (<https://creativecommons.org/licenses/by/4.0/>). **c** Modulation of temporal expression of subgraph 4 by task factors. A regression model that included CPP, RU, reward, and residual updating as predictors of temporal relative expression (calculated by subtracting negative expression from positive expression) of subgraph 4 was fitted for each participant, and coefficients were tested on the group level by *t* tests. The results showed positive effects of CPP, RU, and residual updating. Each point represents one participant. Error bars represent one SEM. (\**p* \< 0.05, \*\*\**p* \< 0.001) **d** The relationship between individual normative learning and the dynamic modulation of subgraph 4 expression by normative factors. This dynamic modulation was indexed as the sum of the coefficients of CPP and RU in (**c**), and represents the extent to which trial-by-trial expression was influenced by the two normative learning factors. 
There was a significant positive correlation across participants. Each point represents one participant. The red line represents the regression line and the shaded area represents the 95% confidence interval. **e** The relationship between individual normative learning and average relative expression of subgraph 4. There was a significant positive correlation across participants. Each point represents one participant. The red line represents the regression line and the shaded area represents the 95% confidence interval. Source data of **c**--**e** are provided as a Source Data file.
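The trial-by-trial analysis relating task factors to subgraph expression can be sketched as an ordinary least-squares regression per participant followed by a group-level t-test. The alignment of trials to the 25-s windows is glossed over here, and all inputs are assumed to be matched per-trial (or per-window) vectors.

```python
import numpy as np
from scipy import stats

def expression_regression(rel_expr, cpp, ru, reward, resid_update):
    """Regress relative subgraph expression on CPP, RU, reward, and residual updating."""
    X = np.column_stack([np.ones_like(cpp), cpp, ru, reward, resid_update])
    beta, *_ = np.linalg.lstsq(X, rel_expr, rcond=None)
    return beta            # [intercept, CPP, RU, reward, residual-updating] coefficients

# Group level: collect one coefficient vector per participant, then test each
# coefficient against zero, e.g. for the CPP effect:
# betas = np.vstack([expression_regression(*subj) for subj in participants])  # hypothetical loop
# t_cpp, p_cpp = stats.ttest_1samp(betas[:, 1], 0.0)
```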
Although CPP or RU also modulated the relative expression of some other subgraphs (e.g., subgraphs 1, 3, and 7; Supplementary Fig. [3](#MOESM1){ref-type="media"}), below we focus on subgraph 4 for several reasons. First, the four factors we investigated explained more variance in the time-dependent relative expression of subgraph 4 than that of any other subgraph. Second, only on subgraph 4 were the effects of CPP and RU strong enough to survive correction for multiple comparisons across ten subgraphs. Third, only on subgraph 4 were the effects of CPP and RU robustly shown across analyses using different sized time windows.
Individual differences associated with subgraph expression {#Sec6}
----------------------------------------------------------
The expression of subgraph 4 was not only modulated by task factors that drive normative learning, but also varied across subjects in a manner that reflected individual differences in normative learning. As an index of normative learning, we estimated the influence of CPP and RU on trial-by-trial belief updates using multiple regression and took the sum of the regression coefficients of CPP (*β*~2~ in Eq. ([6](#Equ6){ref-type=""})) and RU (*β*~3~ in Eq. ([6](#Equ6){ref-type=""})) for each participant^[@CR2]^. This sum reflected how much each individual updated their beliefs in response to normative factors. We examined the relationship between individual differences in this normative belief-updating metric and two aspects of subgraph expression.
First, we examined the relationship between normative belief updating and the dynamic modulation of subgraph expression by normative factors (Supplementary Fig. [5](#MOESM1){ref-type="media"}). As an index of the dynamic modulation of subgraph expression by normative factors, we used the sum of the regression coefficients of CPP and RU on relative expression from the analyses above (Supplementary Fig. [3](#MOESM1){ref-type="media"}). We found a positive correlation between the dynamic modulation of subgraph 4 expression by normative factors and normative belief updating across participants (*r* = 0.448, *p* = 0.004; Fig. [4d](#Fig4){ref-type="fig"}). Second, we also found a positive correlation between the average relative expression of subgraph 4 and normative belief updating across participants (*r* = 0.332, *p* = 0.029; Fig. [4e](#Fig4){ref-type="fig"}; Supplementary Fig. [6](#MOESM1){ref-type="media"}). These effects were still significant when we controlled for the influence of motion on dynamic modulation or average relative expression, whereas the effects of motion itself were not significant (all *p* \> 0.31). These two results show that participants with the highest average relative expression of subgraph 4, and for whom the normative factors account for the most variance in the relative expression of subgraph 4 across time, tended to update their beliefs in a manner more consistent with the normative model than the other subjects.
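The across-participant tests reduce to two Pearson correlations; the three inputs below are hypothetical per-participant vectors built from the behavioral and expression regressions described above.

```python
from scipy import stats

def individual_difference_tests(normative_learning, dyn_modulation, mean_rel_expression):
    """Correlate normative learning (behavioral beta2 + beta3) with two expression measures."""
    r_dyn, p_dyn = stats.pearsonr(normative_learning, dyn_modulation)        # cf. Fig. 4d
    r_avg, p_avg = stats.pearsonr(normative_learning, mean_rel_expression)   # cf. Fig. 4e
    return (r_dyn, p_dyn), (r_avg, p_avg)
```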
Contribution of specific edges to the identified effects {#Sec7}
--------------------------------------------------------
Subgraph 4 describes both within- and between-system functional connectivity for multiple functional systems (Figs. [3](#Fig3){ref-type="fig"}b and [4a, b](#Fig4){ref-type="fig"}; Supplementary Fig. [2a--c](#MOESM1){ref-type="media"}). We next examined the contribution of specific edges (i.e., functional connectivity between specific pairs of brain regions) within subgraph 4 to the task and individual difference effects we observed for that subgraph.
The task-related modulations of subgraph 4 involved primarily between-system, not within-system, functional connectivity. Specifically, we re-estimated the effects of CPP, RU, reward, and residual updating on the relative expression of subgraph 4 using only within-system edges (i.e., only the diagonal cells of the system-by-system matrix in Fig. [3b](#Fig3){ref-type="fig"}; "Within") or only between-system edges (i.e., only the off-diagonal cells of the system-by-system matrix in Fig. [3b](#Fig3){ref-type="fig"}; "Between"). We compared these effects to our previous estimates using all edges (Fig. [5a](#Fig5){ref-type="fig"}; "All") through *t* tests. Removing the between-system edges (Within versus All) reduced the size of the estimated effects of CPP (mean ± SEM = −0.155 ± 0.042, *t*~31~ = −3.73, *p* \< 0.001), RU (−0.300 ± 0.062, *t*~31~ = −4.82, *p* \< 0.001), and residual updating (−0.140 ± 0.053, *t*~31~ = −2.63, *p* = 0.013). In contrast, removing the within-system edges (Between versus All) led to no reliable changes in these effects (all *p* \> 0.21). Further, in a direct comparison of the reduced subgraphs with only within- or between-system edges, the effects estimated with between-system edges only were stronger for CPP (0.151 ± 0.042, *t*~31~ = 3.63, *p* \< 0.001), RU (0.290 ± 0.063, *t*~31~ = 4.63, *p* \< 0.001), and residual updating (0.139 ± 0.048, *t*~31~ = 2.91, *p* = 0.007).Fig. 5The contribution of between-system and within-system edges to effects of task factors and individual differences on subgraph 4 expression.**a** The contribution of between-system and within-system edges to the effect of task factors on temporal relative expression of subgraph 4. To determine the relative contribution of between- and within-system edges on time-dependent subgraph 4 expression, we performed three comparisons on the effects estimated by different types of edges using *t* tests: within-system edges only (Within), between-system edges only (Between) and all edges (All). First, removing between-system edges (Within versus All) decreased the effect of CPP, RU and residual updating. Second, in contrast, after removing within-system edges (Between versus All), there was no significant change in these coefficients. Third, we directly compared the effects contributed from between-system edges only and from within-system edges only (Between versus Within). For between-system edges, there were stronger positive effects for CPP, RU, and residual updating. Error bars represent one SEM. (\**p* \< 0.05, \*\**p* \< 0.01, \*\*\**p* \< 0.001). **b** The contribution of between-system and within-system edges to the relationship between normative learning and dynamic modulation and average expression of subgraph 4. We performed the same three comparisons to determine the relative contribution of between- and within-system edges for each relationship with individual differences. For the effect of dynamic modulation, removing within-system edges (Between versus All) decreased the correlation coefficient. This correlation coefficient was also larger for within-system edges only than between-system edges only, but this effect was not statistically significant. For the effect of average expression, removing between-system edges (Within versus All) decreased the correlation coefficient, and the correlation coefficient was larger for between-system edges only than within-system edges only, though neither of these effects were statistically significant. Source data are provided as a Source Data file. 
Error bars represent one SEM. (\*\**p* \< 0.01).
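One way to sketch the within-/between-system comparison is to mask the subgraph's edges by whether the two ROIs share a functional system before recomputing expression and refitting the regressions. The usage comments below are an illustrative simplification of that procedure, not a verbatim description of it.

```python
import numpy as np

def edge_mask(labels, kind="within"):
    """Boolean mask over unique ROI pairs selecting within- or between-system edges."""
    labels = np.asarray(labels)
    i, j = np.triu_indices(len(labels), k=1)
    same_system = labels[i] == labels[j]
    return same_system if kind == "within" else ~same_system

# Usage sketch: restrict both the subgraph edge weights and the windowed FC to the
# masked edges, recompute time-dependent expression from that restricted edge set,
# and then refit the CPP/RU/reward/residual-updating regression as before.
# mask = edge_mask(labels, kind="between")
# restricted_w = W[mask, 3]        # e.g., subgraph 4's between-system edge weights (hypothetical)
```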
The contributions of within- and between-system functional connectivity to the individual difference effects of subgraph 4 were less clear. For the relationship between individual differences in normative learning and average relative expression, the pattern across comparisons was similar to that observed for task effects (Fig. [5b](#Fig5){ref-type="fig"}), which would indicate a greater contribution of between-system edges, but none of the comparisons were statistically significant. In contrast, for the relationship between individual differences in normative learning and the dynamic modulation of subgraph 4, within-system edges appeared to be more important, as removing the within-system edges (Between versus All) reduced this correlation (difference = 0.048, *p* = 0.006; Fig. [5b](#Fig5){ref-type="fig"}).
Supplementary analyses identified contributions of specific functional systems (i.e., one row/column from the system-by-system matrix in Fig. [3b](#Fig3){ref-type="fig"}; Supplementary Fig. [7](#MOESM1){ref-type="media"}) and of specific system-by-system edges (i.e., one cell from the system-by-system matrix in Fig. [3b](#Fig3){ref-type="fig"}; Supplementary Fig. [8](#MOESM1){ref-type="media"}) to the task and individual difference effects on subgraph 4.
Robust effects across different sized time windows {#Sec8}
--------------------------------------------------
To determine the sensitivity of our results to the size of this time window, we repeated the entire procedure using shorter (8-TR/20 s window with 6-TR/15 s overlap; Supplementary Figs. [9](#MOESM1){ref-type="media"}--[12](#MOESM1){ref-type="media"}) or longer (12-TR/30 s window with 10-TR/25 s overlap; Supplementary Figs. [13](#MOESM1){ref-type="media"}--[16](#MOESM1){ref-type="media"}) time windows. That is, we shorten or lengthen the time window by the interval of one trial (\~5 s). With both shorter and longer time windows, we identified ten subgraphs. There was a high degree of similarity between the ten subgraphs identified in the main analysis and those identified using either shorter (edges between nodes: all *r* \> 0.81; edges between systems: all *r* \> 0.80) or longer (edges between nodes: all *r* \> 0.98; edges between systems: all *r* \> 0.98) time windows. With longer time windows, the relative expression of subgraph 4 still showed the same relationship to task factors (CPP and RU) and to individual differences in normative learning; with shorter time windows, these effects were also present but weaker.
Relationship between regional activity and connectivity {#Sec9}
-------------------------------------------------------
In our previous report, we described how CPP, RU, reward, and residual updating influenced univariate brain activity. In a final set of analyses, we examined the relationship between these previously reported univariate effects and the changes in dynamic functional connectivity we identified above.
The brain regions that were most strongly represented in subgraph 4 overlapped spatially with the brain regions whose activity was modulated reliably by CPP and RU in our previous report. As a measure of a region's involvement in subgraph 4, for each ROI, we calculated the mean strength of every edge between that ROI and all other ROIs in subgraph 4, and normalized these mean values between 0 and 1. We then related this metric to activation from our previous study^[@CR2]^, as measured by the *z*-statistic of the modulation effect of CPP or RU. This *z*-statistic indicated the effect size of change of univariate activity in response to CPP or RU across participants. Across all ROIs, there was a positive correlation between edge strength in subgraph 4 and activation for CPP (*r* = 0.403, *p* \< 0.0001; Fig. [6a](#Fig6){ref-type="fig"}) and activation for RU (*r* = 0.704, *p* \< 0.0001; Fig. [6b](#Fig6){ref-type="fig"}). The Surf Ice software (<https://www.nitrc.org/projects/surfice>) was used to show the map of normalized mean edge strengths for subgraph 4 alongside the thresholded activation maps for CPP and RU (Fig. [6c](#Fig6){ref-type="fig"}). Regions with stronger edge strength in subgraph 4, such as the insula, dorsomedial frontal cortex, dorsolateral prefrontal cortex, posterior parietal cortex, and occipital cortex, also tended to show stronger increases in activation with increases in CPP and RU.Fig. 6Relationship between edge strength of subgraph 4 and univariate task activations.**a** Relationship between the activation for CPP and the edge strength of subgraph 4. We calculated the Pearson correlation coefficient between the *z*-statistic for CPP from McGuire et al. (2014) and the edge strength across nodes in subgraph 4. Each data point represents an ROI. The edge strength for each ROI was calculated as the column sum of that ROI's edges to other ROIs, reflecting the summed interactions between that ROI and all others. The edges were normalized into the scale between 0 and 1. A significantly positive correlation was observed. The red line represents the regression line and the shaded area represents the 95% confidence interval. **b** Relationship between the activation for RU and the edge strength of subgraph 4. We observed a significant positive correlation between the *z*-statistic for RU from McGuire et al. (2014) and the edge strength across nodes in subgraph 4. The red line represents the regression line and the shaded area represents the 95% confidence interval. Source data of **a**, **b** are provided as a Source Data file. **c** Whole-brain thresholded activation maps for CPP and RU from McGuire et. al (2014) and whole-brain maps for edge strength of subgraph 4 in the current study.
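The ROI-level summary used for this comparison (mean edge strength per ROI, rescaled to [0, 1], correlated with the univariate *z*-statistics) can be sketched as follows; the input arrays are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

def roi_edge_strength(subgraph_edges):
    """Mean weight of each ROI's edges to all other ROIs, rescaled to [0, 1]."""
    n = subgraph_edges.shape[0]
    strength = (subgraph_edges.sum(axis=1) - np.diag(subgraph_edges)) / (n - 1)
    return (strength - strength.min()) / (strength.max() - strength.min())

# r_cpp, p_cpp = stats.pearsonr(roi_edge_strength(subgraph4_edges), cpp_zstats)  # hypothetical inputs
# r_ru,  p_ru  = stats.pearsonr(roi_edge_strength(subgraph4_edges), ru_zstats)
```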
In addition to these strong associations between univariate brain activation and edge strength, effects beyond those captured by univariate task activity also contributed to our dynamic functional connectivity results. To demonstrate this, we estimated functional connectivity from time-series that only contained task-modulated univariate activity, performed NMF on this matrix, and repeated all of our main analyses (Supplementary Figs. [17](#MOESM1){ref-type="media"}--[20](#MOESM1){ref-type="media"}). This analysis again identified a subgraph 4 whose strongest edges were in the fronto-parietal system, but it did not recapitulate all of the relationships between subgraph 4 expression and task factors and individual differences seen in our main analyses. These results implied that the dynamic functional connectivity patterns identified in our main analyses reflect a mixture of coordinated activity across regions (which can be captured by univariate analyses) and other statistical dependencies across regions that require network-based analyses.
Discussion {#Sec10}
==========
We identified a pattern of dynamic functional brain connectivity in human subjects performing a predictive-inference task. This pattern was expressed most strongly during times that demanded faster belief updating and was enhanced in individuals who most effectively used adaptive belief updating to perform the task. To identify this pattern, we used NMF, an unsupervised machine-learning technique that decomposes the full matrix of time-dependent functional connectivity into subgraphs (patterns of functional connectivity), and the time-dependent magnitude of these subgraphs. Among the subgraphs we identified in our data, the expression of one subgraph in particular was modulated reliably by three trial-by-trial factors that influenced the degree of behavioral belief updating: CPP (surprise), RU (uncertainty), and residual updating (updating unaccounted for by surprise or uncertainty). Notably, CPP and RU are factors that normatively promote greater belief updating, scaling the degree to which past observations are discounted relative to the most recent evidence. Residual updating likely captures, at least in part, deviations between the objective values of CPP and RU in the normative model and the individual's subjective estimates of those factors. Thus, the expression of this subgraph reflects not only normative factors that should influence belief updating but also likely fluctuations in subjective estimates of those factors. In addition to being modulated by these trial-by-trial task factors, expression of this subgraph also varied across individuals in a manner associated with individual differences in belief updating. Participants who tended to update their beliefs in a more normative manner---that is, with a stronger influence of surprise (CPP) and uncertainty (RU)---showed stronger dynamic modulation of the expression of this subgraph by normative factors and showed stronger average expression of this subgraph.
The subgraph modulated by surprise and uncertainty included interactions between multiple functional systems, most prominently the fronto-parietal task control, memory retrieval, salience, and dorsal attention systems (Figs. [3b](#Fig3){ref-type="fig"} and [4a](#Fig4){ref-type="fig"}). These systems include multiple regions in the anterior insula, dorsolateral and dorsomedial frontal cortex, and lateral and medial parietal cortex (Figs. [4b](#Fig4){ref-type="fig"} and [6c](#Fig6){ref-type="fig"}). These regions showed a large degree of overlap with areas that we have previously shown to have increased univariate activation in response to both surprise and uncertainty (in this same dataset; Fig. [6](#Fig6){ref-type="fig"})^[@CR2]^. A smaller subset of these regions, including parts of the dorsomedial frontal cortex, anterior insula, inferior frontal cortex, posterior cingulate cortex, and posterior parietal cortex, was modulated not only by both normative (surprise and uncertainty) factors, but also by a non-normative one (reward). This smaller subset includes regions that participate in the fronto-parietal task-control, memory retrieval, salience, and dorsal attention systems.
Previously, we also reported regions whose univariate activity was modulated by either surprise or uncertainty alone. Surprise was associated selectively with activation in occipital cortex, and uncertainty was associated selectively with activation in anterior prefrontal and parietal cortex^[@CR2]^. We similarly have reported multivariate activation patterns that were associated selectively with either surprise or uncertainty alone^[@CR7]^. In the current study, we identified a key pattern of functional connectivity that was modulated by both surprise and uncertainty, but we did not identify any other pattern that was modulated reliably by either surprise or uncertainty alone. One possible explanation for this lack of a positive result was our need to use relatively long time windows (25 s, corresponding to 4--6 trials) in order to obtain reliable estimates of functional connectivity. These time windows likely included both the surprise elicited by change-points and the uncertainty that follows. Thus, functional connectivity related to surprise and uncertainty may have been difficult to dissociate temporally in our task and analysis design. Using a task that can temporally separate the tracking of surprise and uncertainty^[@CR28]^ might enable the identification of distinct patterns of functional connectivity for each factor.
The identified pattern of whole-brain functional connectivity was also expressed across individuals in a manner that varied with the degree to which they updated their beliefs more in line with the normative model. Thus, individual differences in learning were also reflected in features of individual functional connectomes. In our previous study, we noted a relationship between individual differences in normative learning and the degree to which activity in dorsomedial frontal cortex and anterior insula was modulated by normative factors (surprise and uncertainty)^[@CR2]^. Here, we showed that normative learning was also associated with how functional connectivity was modulated dynamically by the same normative factors. These new findings add to previous work showing that brain network dynamics can reflect individual differences in learning in various domains^[@CR12],[@CR13],[@CR15],[@CR19],[@CR22]^. Potentially, these differences in individual functional connectomes during learning could reflect individual differences in resting-state (task-independent) functional connectivity^[@CR29]^, which merits further study.
Functional connectivity captures many different kinds of statistical dependencies between brain regions, including those that result from task-driven co-activation. The strong association between neural activation and functional connectivity during periods of surprise and uncertainty in our results (Fig. [6](#Fig6){ref-type="fig"}), as well as previous studies in other domains^[@CR13],[@CR15],[@CR17],[@CR19],[@CR21],[@CR22]^, raises the possibility that the increases in functional connectivity between brain regions might have arisen because these regions became more tightly synchronized to external task events, without necessarily any increase in communication between them. To refute this possibility, we repeated our analyses on the predicted BOLD time series from univariate GLMs. These predicted time series, which contain only task-driven statistical dependencies between brain regions, could not recapitulate all of the effects that we observed in our actual BOLD time series. Specifically, we found modulations by task (e.g., the modulation of subgraph expression by surprise and residual updating) and individual differences (e.g., the relationship between individual differences in normative learning and the dynamic modulation of subgraph expression by normative factors) that were apparent only in the full, original functional connectivity matrices. Thus, these effects appear to include neural communications that do not simply reflect task-driven co-activation. Even though the changes in functional connectivity that we describe may reflect a mixture of task-driven and endogenous dynamics, the network analysis provides an important higher-level, reduced-dimensionality description of these changes.
A key feature of the brain-wide pattern of functional connectivity that we identified was connectivity involving the fronto-parietal task-control system. We characterized the complex pattern of functional connectivity in the learning-related subgraph by summarizing the connectivity according to the putative functional system of each region^[@CR27]^. Among all the functional systems, the largest proportion of connectivity in the learning-related subgraph involved the fronto-parietal system. Connectivity associated with the fronto-parietal system has been shown to increase at the beginning of learning and decrease toward the later phases of learning^[@CR13],[@CR19],[@CR22]^. Our result extends this finding by showing that fronto-parietal functional connectivity is modulated dynamically in a trial-by-trial manner according to the need for new learning. That is, the pattern of functional connectivity captured by the learning-related subgraph increased after surprising task changes and then decreased gradually as more information was gained about the current state. The fronto-parietal system is thought of as a control system that is involved in flexible adjustments of behavior^[@CR30],[@CR31]^. In particular, connectivity between the fronto-parietal network and other systems has been shown to change in response to different task requirements^[@CR32]^. This type of flexible control is critical for learning in a dynamic environment, a context in which people should adjust their degree of belief updating in a context-dependent manner^[@CR1],[@CR4]^.
Although the learning-related subgraph was also characterized by a balanced strength of within-system connectivity and between-system connectivity, the critical features that changed in response to task dynamics involved primarily between-system connectivity. This result implies that faster learning was associated with a greater degree of integration between different functional systems. Several previous studies have shown that complex cognitive tasks are associated with more integration between systems^[@CR33]--[@CR36]^. Other work has shown that as a task becomes more practiced over time, the interaction between systems decreased while the connections within systems remained strong^[@CR13]^. Here, we demonstrated changes in integration on a fast time scale, as task demands varied from trial to trial. Integration between systems was greater during periods of the task when surprise or uncertainty was high, and therefore there was a need to update one's beliefs and base them more on the current evidence than on expectations developed from past experience.
In this study, we provided a network-based perspective on the neural substrates of learning in dynamic and uncertain environments. In such environments, people should flexibly adjust between slow and fast learning: beliefs should be updated more strongly when new evidence is most informative, such as when the environment undergoes a surprising change or beliefs are highly uncertain. Here, we identified a specific brain-wide pattern of functional connectivity (subgraph) that fluctuated dynamically with changes in surprise and uncertainty. The dynamics and expression of this pattern of functional connectivity also varied across individuals in a manner that reflected differences in learning. This pattern was expressed more strongly and was more strongly modulated by surprise and uncertainty in people who updated their beliefs in a more normative manner, with a stronger influence of surprise and uncertainty. The most important aspect of this learning-related pattern of functional connectivity is functional integration between the fronto-parietal and other functional systems. These results establish a novel link between dynamic adjustments in learning and dynamic, whole-brain changes in functional connectivity.
Methods {#Sec11}
=======
Participants {#Sec12}
------------
The dataset has been described in our previous reports^[@CR2]^. Thirty-two individuals participated in the fMRI experiment: 17 females, mean age = 22.4 years (SD = 3.0; range: 18--30). Human subject protocols were approved by the Institutional Review Board of the University of Pennsylvania. All participants provided informed consent before the experiment.
Task {#Sec13}
----
Each participant completed four 120-trial runs during functional magnetic resonance imaging. In each run, participants performed a predictive-inference task (Fig. [1a](#Fig1){ref-type="fig"}). On each trial, participants made a prediction about where the next bag would be dropped from an occluded helicopter by positioning a bucket along the horizontal axis (0--300) of the screen. The location of the bag was sampled from a Gaussian distribution with a mean (the location of the helicopter) and a standard deviation (noise). The standard deviation was high (SD = 25) or low (SD = 10) in different runs. The location of the helicopter usually remained stable but it changed occasionally. The probability of change was zero for the first three trials after a change and 0.125 for the following trials. When the location changed, the new location was sampled from a uniform distribution. Correctly predicting the location of the bag resulted in coins landing in the bucket. These coins either had positive or neutral value depending on their color, which was randomly assigned for each trial.
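For readers who want to reproduce the task statistics, a minimal simulation of the generative process is sketched below; the 0-300 screen range, the noise levels, and the hazard rate follow the description above, while details such as the handling of the very first trials of a run are assumptions.

```python
import numpy as np

def simulate_run(n_trials=120, noise_sd=25, hazard=0.125, seed=None):
    """Simulate one run of the helicopter task's generative process.

    The helicopter mean cannot change for the first three trials after a change;
    afterwards it relocates with probability `hazard`, drawn uniformly on [0, 300].
    """
    rng = np.random.default_rng(seed)
    mean = rng.uniform(0, 300)
    since_change = np.inf            # assume a change is allowed from the start of the run
    means, bags = [], []
    for _ in range(n_trials):
        since_change += 1
        if since_change > 3 and rng.random() < hazard:
            mean = rng.uniform(0, 300)
            since_change = 0
        means.append(mean)
        bags.append(rng.normal(mean, noise_sd))
    return np.array(means), np.array(bags)

# means, bags = simulate_run(noise_sd=10)   # low-noise run; use noise_sd=25 for high noise
```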
Behavior model {#Sec14}
--------------
We applied the same normative model described in our previous study^[@CR2]^. An approximation to the ideal observer solution to this task updates beliefs according to a delta learning rule (Fig. [1b](#Fig1){ref-type="fig"})

$$\delta_t = X_t - B_t, \tag{1}$$

$$B_{t+1} = B_t + \alpha_t \times \delta_t, \tag{2}$$

where *δ*~*t*~ is the prediction error, which is the difference between the observed outcome (bag drop location, *X*~*t*~) and the prediction (bucket location, *B*~*t*~). Beliefs are updated in proportion to the prediction error, and this proportion is determined by *α*~*t*~, the learning rate. The learning rate is adjusted adaptively on each trial according to two normative factors (Fig. [1c](#Fig1){ref-type="fig"})

$$\alpha_t = \Omega_t + (1 - \Omega_t) \times \tau_t, \tag{3}$$

where Ω~*t*~ is the CPP and *τ*~*t*~ is the RU. The learning rate, CPP and RU are all constrained to be between zero and one, and the learning rate increases when either CPP or RU is high. CPP reflects the likelihood that a change-point has happened^[@CR1],[@CR2]^

$$\Omega_t = \frac{U(X_t \mid 0, 300)\,H}{U(X_t \mid 0, 300)\,H + N(X_t \mid B_t, \sigma_t^2)\,(1 - H)}, \tag{4}$$

where $U(X_t \mid 0, 300)$ indicates the probability of *X*~*t*~ from a uniform distribution between 0 and 300, $N(X_t \mid B_t, \sigma_t^2)$ indicates the probability of *X*~*t*~ from a Gaussian distribution with mean *B*~*t*~ and variance $\sigma_t^2$, where $\sigma_t^2$ is the variance of the predictive distribution of the bag location, and *H* is the average probability of change (0.1) across trials.
RU reflects the uncertainty about the current location of the helicopter relative to the amount of noise in the environment^[@CR2]^

$$\tau_{t+1} = \frac{\Omega_t \sigma_N^2 + (1 - \Omega_t)\,\tau_t \sigma_N^2 + \Omega_t (1 - \Omega_t)\left(\delta_t (1 - \tau_t)\right)^2}{\Omega_t \sigma_N^2 + (1 - \Omega_t)\,\tau_t \sigma_N^2 + \Omega_t (1 - \Omega_t)\left(\delta_t (1 - \tau_t)\right)^2 + \sigma_N^2}, \tag{5}$$

where $\sigma_N^2$ is the variance of the outcome distribution used to generate the location of the bag. There are three terms present in both the numerator and denominator. The first term is the variance of the helicopter distribution conditional on a change-point, while the second term is the variance of the helicopter distribution conditional on no change-point. The third term reflects the variance due to the difference in mean between the two conditional distributions. The three terms together capture the uncertainty about the location of the helicopter.
Figure [1c](#Fig1){ref-type="fig"} shows an example of the dynamics of CPP and RU. CPP increases when there is an unexpectedly large prediction error. RU increases after CPP increases and decays slowly as more precise estimates of the helicopter location are possible.
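For concreteness, the following is a minimal Python sketch of the delta rule and the CPP/RU updates described above. It is an illustration rather than the authors' code; in particular, the predictive variance is taken here as $\sigma_N^2/(1 - \tau_t)$, a common formulation that should be checked against the original model implementation, and all function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def normative_update(B, tau, X, sigma_noise, hazard=0.1, lo=0.0, hi=300.0):
    """One trial of the approximate ideal-observer update (illustrative sketch).

    B: current prediction (bucket location); tau: current relative uncertainty;
    X: observed outcome (bag location); sigma_noise: SD of the bag distribution.
    """
    delta = X - B                                    # prediction error

    # Predictive variance of the bag location (assumed form; see lead-in).
    sigma2 = sigma_noise**2 / max(1.0 - tau, 1e-12)

    # Change-point probability: uniform vs. Gaussian likelihood of the outcome.
    p_unif = hazard / (hi - lo)
    p_gauss = (1.0 - hazard) * norm.pdf(X, loc=B, scale=np.sqrt(sigma2))
    cpp = p_unif / (p_unif + p_gauss)

    # Learning rate combines CPP and RU, then the delta rule updates the belief.
    alpha = cpp + (1.0 - cpp) * tau
    B_new = B + alpha * delta

    # Relative uncertainty carried to the next trial.
    s2 = sigma_noise**2
    num = cpp * s2 + (1.0 - cpp) * tau * s2 + cpp * (1.0 - cpp) * (delta * (1.0 - tau))**2
    tau_new = num / (num + s2)
    return B_new, tau_new, alpha, cpp, delta
```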
As in our previous study, a regression model was applied to investigate how the factors in this normative model, as well as other aspects of the task, influenced participants' belief updates. We regressed trial-by-trial updates (*B*~*t*+1~ − *B*~*t*~) against the prediction error (*δ*~*t*~), the interaction between prediction error and the two factors from the normative model, CPP (Ω~*t*~) and RU (*τ*~*t*~), as well as the interaction between prediction error and whether the outcome was rewarded or not^[@CR2]^. The form of the regression model can be written as

$$\mathrm{Update}_t = \beta_0 + \beta_1\delta_t + \beta_2\delta_t\Omega_t + \beta_3\delta_t(1 - \Omega_t)\tau_t + \beta_4\delta_t\mathrm{Reward}_t + \beta_5\mathrm{Edge}_t + \varepsilon,$$

where Edge is a regressor of no interest that captures the tendency to avoid updating toward the edges of the screen ($(150 - B_{t+1})\,|150 - B_{t+1}|$). If subjects used a fixed learning rate (Eq. ([2](#Equ2){ref-type=""}) alone), *β*~2~ and *β*~3~ will be zero and *β*~1~ will reflect that fixed learning rate. In contrast, if subjects behave exactly in accordance with the normative model (Eq. ([3](#Equ3){ref-type=""})), *β*~2~ and *β*~3~ will be one, and *β*~1~ will be zero. Thus, we constructed the regression model so that the weights on *β*~2~ and *β*~3~ reflect the degree to which the two normative factors, CPP and RU, drive dynamic learning rates.
This regression model was fitted separately to each participant's data to estimate the influence of each factor on each participant's behavior. We used the residuals of this regression to examine the relationship between subgraph expression and residual updating. To examine the relationship between individual differences in normative learning and functional network dynamics, we used the sum of the regression coefficients on the CPP term (*β*~2~) and the RU term (*β*~3~) as an index of normative learning.
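A minimal per-participant fit of this regression might look like the following sketch; it assumes trial-wise arrays for predictions, outcomes, CPP, RU, and reward, and uses ordinary least squares via `numpy.linalg.lstsq`. Function and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def fit_learning_regression(B, X, cpp, ru, reward, center=150.0):
    """Per-participant OLS fit of the belief-updating regression (sketch).

    B: predictions on trials 1..T+1 (updates are B[1:] - B[:-1]);
    X: outcomes on trials 1..T; cpp, ru, reward: trial-wise task factors.
    Returns the coefficients, residual updating, and the normative index.
    """
    update = B[1:] - B[:-1]
    delta = X - B[:-1]
    edge = (center - B[1:]) * np.abs(center - B[1:])   # edge-avoidance regressor
    D = np.column_stack([
        np.ones_like(delta),        # beta_0: intercept
        delta,                      # beta_1: fixed learning rate
        delta * cpp,                # beta_2: CPP-driven updating
        delta * (1 - cpp) * ru,     # beta_3: RU-driven updating
        delta * reward,             # beta_4: reward modulation
        edge,                       # beta_5: edge avoidance (no interest)
    ])
    beta, *_ = np.linalg.lstsq(D, update, rcond=None)
    residual_updating = update - D @ beta
    normative_index = beta[2] + beta[3]                # beta_2 + beta_3
    return beta, residual_updating, normative_index
```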
MRI data acquisition and preprocessing {#Sec15}
--------------------------------------
MRI data were collected on a 3 T Siemens Trio with a 32-channel head coil. Functional data were acquired using gradient-echo echoplanar imaging (EPI) (3 mm isotropic voxels, 64 × 64 matrix, 42 axial slices tilted 30° from the AC--PC plane, TE = 25 ms, flip angle = 75°, TR = 2500 ms). There were 4 runs with 226 images per run. T1-weighted MPRAGE structural images (0.9375 × 0.9375 × 1 mm voxels, 192 × 256 matrix, 160 axial slices, TI = 1100 ms, TE = 3.11 ms, flip angle = 15°, TR = 1630 ms) and matched fieldmap images (TE = 2.69 and 5.27 ms, flip angle = 60°, TR = 1000 ms) were also collected. Data were preprocessed with FSL^[@CR37],[@CR38]^ and AFNI^[@CR39],[@CR40]^. Functional data were corrected for slice timing (AFNI's 3dTshift) and head motion (FSL's MCFLIRT), attenuated for outliers (AFNI's 3dDespike), undistorted and warped to MNI space (FSL's FLIRT and FNIRT), smoothed with 6 mm FWHM Gaussian kernel (FSL's fslmaths) and intensity scaled by the grand-mean value per run. Structural images were segmented into gray matter, white matter (WM) and cerebrospinal fluid (CSF) (FSL's FAST)^[@CR41]^.
Constructing time-varying functional networks {#Sec16}
---------------------------------------------
For each run and each participant, BOLD time series were obtained from each of 264 ROIs (diameter = 9 mm) based on the previously defined parcellation^[@CR27]^. ROIs that did not have valid BOLD time series for all runs and all participants were removed, resulting in *N* = 247 ROIs. We visualized these ROIs on the brain using the BrainNet Viewer (<https://www.nitrc.org/projects/bnv>)^[@CR42]^. For each BOLD time series, a band-pass filter was applied with a cutoff of 0.01--0.08 Hz. This low-frequency band has been shown to reflect neuronal activation and neural synchronization^[@CR43]--[@CR45]^. To remove the influence of head motion, a confound regression was implemented to regress out nuisance factors from each BOLD time series. This confound regression included 24 motion parameters (three translation and three rotation motion parameters and their expansion $[R_t\ R_t^2\ R_{t-1}\ R_{t-1}^2]$)^[@CR46]^, as well as average signals from WM and CSF^[@CR47]^.
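A simplified sketch of this per-ROI cleaning step (band-pass filtering followed by confound regression) is shown below. The Butterworth filter and the exact construction of the 24-parameter motion model are assumptions for illustration, not a description of the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def clean_roi_timeseries(ts, motion, wm, csf, tr=2.5, band=(0.01, 0.08)):
    """Band-pass filter one ROI time series and regress out nuisance signals.

    ts: (T,) ROI time series; motion: (T, 6) realignment parameters;
    wm, csf: (T,) average white-matter / CSF signals.
    """
    # Band-pass filter (zero-phase Butterworth) in the 0.01-0.08 Hz band.
    nyq = 0.5 / tr
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    ts_f = filtfilt(b, a, ts)

    # 24-parameter motion model: R, R^2, R(t-1), R(t-1)^2, plus WM/CSF signals.
    motion_lag = np.vstack([motion[:1], motion[:-1]])
    confounds = np.column_stack([
        motion, motion**2, motion_lag, motion_lag**2, wm, csf,
        np.ones(len(ts)),                      # intercept
    ])
    beta, *_ = np.linalg.lstsq(confounds, ts_f, rcond=None)
    return ts_f - confounds @ beta             # residual (cleaned) series
```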
In order to construct dynamic functional networks, we defined sliding time windows and calculated Pearson correlation coefficients between ROI time series in each sliding time window. We assigned these coefficients to the first TR in the time windows. To ensure magnetization equilibrium, the first 6 volumes of each run were removed from the analysis. For the rest of the volumes in each run, a sliding window was defined with a 10-TR (25 s) length and 80% overlap across windows. Each run had 106 sliding time windows, leading to *T* = 424 sliding time windows for each participant. Each participant's data thus formed a matrix of dynamic functional networks with dimensions *N* × *N* × *T*. Then, we took each participant's *N* × *N* matrix and unfurled the upper triangle into an $\frac{N(N-1)}{2}$ vector. By concatenating vectors across all time windows (*T*), we obtained an $\frac{N(N-1)}{2} \times T$ matrix. Furthermore, we concatenated matrices from *S* = 32 participants to form a $\frac{N(N-1)}{2} \times (T \times S)$ matrix. To ensure that our approach did not give undue preference to either positively or negatively weighted functional edges, we separated this matrix into two thresholded matrices: one composed of positively weighted edges, and one composed of negatively weighted edges. That is, in the matrix of positive functional correlations between ROI time series, the original negative correlations between ROI time series were set to 0; in the matrix of negative functional correlations between ROI time series, all values were multiplied by −1, and the original positive functional correlations between ROI time series were set to 0. After concatenating the matrix composed of positively weighted edges and the matrix of negatively weighted edges, we had a final $\frac{N(N-1)}{2} \times (T \times S \times 2)$ matrix **A**.
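The sliding-window construction can be sketched as follows for a single run; window length, step size, and the splitting of positive and negative edges follow the description above, while function names are illustrative.

```python
import numpy as np

def dynamic_connectivity_matrix(bold, win=10, step=2):
    """Edge-by-window connectivity matrix for one run of one participant.

    bold: (n_vols, N) cleaned ROI time series (after dropping the first
    6 volumes). win=10 TRs with step=2 TRs gives 80% overlap.
    Returns an (N*(N-1)/2, n_windows) matrix of Pearson correlations.
    """
    n_vols, n_roi = bold.shape
    iu = np.triu_indices(n_roi, k=1)              # upper-triangle edge index
    cols = []
    for s in range(0, n_vols - win + 1, step):
        c = np.corrcoef(bold[s:s + win].T)        # N x N correlation matrix
        cols.append(c[iu])                        # unfurl the upper triangle
    return np.column_stack(cols)

def split_signed_edges(A):
    """Split edges into non-negative positive and sign-flipped negative parts."""
    A_pos = np.clip(A, 0, None)                   # negative correlations -> 0
    A_neg = np.clip(-A, 0, None)                  # positive correlations -> 0
    return np.hstack([A_pos, A_neg])              # concatenate along windows
```

With 220 retained volumes per run, this windowing yields the 106 windows per run described above.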
Clustering functional networks into subgraphs {#Sec17}
---------------------------------------------
We applied an unsupervised machine learning algorithm called NMF^[@CR23]^ on **A** to identify subgraphs **W** and the time-dependent expressions of subgraphs **H**. The matrix factorization problem $\mathbf{A} \approx \mathbf{WH}\ \mathrm{s.t.}\ \mathbf{W} \ge 0, \mathbf{H} \ge 0$ was solved by optimization of the cost function

$$\min_{\mathbf{W},\mathbf{H}} \frac{1}{2}\|\mathbf{A} - \mathbf{WH}\|_F^2 + \alpha\|\mathbf{W}\|_F^2 + \beta\sum_{t=1}^{TS}\|\mathbf{H}(:,t)\|_1^2,$$

where **A** is the functional connectivity matrix, **W** is a matrix of subgraph connectivity with size $\frac{N(N-1)}{2} \times k$, and **H** is a matrix of time-dependent expression coefficients for subgraphs with size *k* × (*T* × *S* × 2). The parameter *k* is the number of subgraphs, *α* is a regularization of the connectivity for subgraphs, and *β* is a penalty that imposes sparsity on the temporal expression coefficients^[@CR48]^. For fast and efficient factorization to solve this equation, we used alternating non-negative least squares with the block-pivoting method and 100 iterations^[@CR49]^. The matrices **W** and **H** were initialized with randomized values from a uniform distribution between 0 and 1.
A random sampling procedure was used to find the optimal parameters *k*, *α*, and *β*^[@CR50]^. In this procedure, the NMF algorithm was re-run 1000 times with parameter *k* drawn from *U*(2, 15), parameter *α* drawn from *U*(0.01, 1), and parameter *β* drawn from *U*(0.01, 1). The subgraph learning performance was evaluated through four-fold cross-validation. In each fold, twenty-four participants were used for training; eight participants were used for testing and calculating cross-validation error ($\|\mathbf{A} - \mathbf{WH}\|_F^2$). An optimal parameter set should minimize the cross-validation error. We chose an optimal parameter set (*k* = 10, *α* = 0.535, *β* = 0.230) that ensured the cross-validation error was in the bottom 25% of the distribution of cross-validation error from our random sampling scheme^[@CR25]^.
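As an illustration only, the factorization could be approximated with scikit-learn's `NMF`, as in the sketch below. Note that scikit-learn's penalties (`alpha_W`, `alpha_H`, `l1_ratio`) only roughly correspond to the Frobenius penalty on **W** and the squared L1 column penalty on **H** in the cost function above, and its coordinate-descent solver stands in for the alternating non-negative least-squares block-pivoting solver used here.

```python
import numpy as np
from sklearn.decomposition import NMF

def fit_subgraphs(A, k=10, alpha_w=0.535, beta_h=0.230, seed=0):
    """Approximate the regularized NMF A ~ WH with W, H >= 0 (sketch only).

    A: (n_edges, n_windows * 2) non-negative connectivity matrix.
    The penalty structure only approximates the paper's cost function,
    since scikit-learn shares a single l1_ratio between W and H.
    """
    model = NMF(
        n_components=k,
        init="random",
        solver="cd",
        max_iter=100,
        alpha_W=alpha_w,
        alpha_H=beta_h,
        l1_ratio=0.0,            # ridge-like penalties; see lead-in caveat
        random_state=seed,
    )
    W = model.fit_transform(A)   # (n_edges, k): subgraph connectivity
    H = model.components_        # (k, n_windows * 2): temporal expression
    return W, H
```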
Since the result of NMF is non-deterministic, we implemented consensus clustering to obtain reliable subgraphs^[@CR51]^. In this procedure, we (i) used the optimal parameters and ran the NMF 100 times on **A**, (ii) concatenated subgraph matrix **W** across 100 runs into an aggregate matrix with dimensions $\frac{N(N-1)}{2} \times (k \times 100)$, (iii) applied NMF to this aggregate matrix to obtain a final set of subgraphs **W**~consensus~ and expression coefficients **H**~consensus~.
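A simplified sketch of this consensus procedure is given below; re-estimating **H** by a clipped least-squares projection is a shortcut for illustration rather than the exact procedure, and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

def consensus_subgraphs(A, k=10, n_runs=100):
    """Consensus clustering over repeated NMF runs (simplified sketch).

    A: (n_edges, n_cols) non-negative matrix. Runs NMF n_runs times with
    different random initializations, stacks the resulting W matrices,
    and factorizes the aggregate to obtain consensus subgraphs.
    """
    W_all = []
    for run in range(n_runs):
        model = NMF(n_components=k, init="random", max_iter=100, random_state=run)
        W_all.append(model.fit_transform(A))        # (n_edges, k) per run
    W_agg = np.hstack(W_all)                        # (n_edges, k * n_runs)

    consensus = NMF(n_components=k, init="random", max_iter=200, random_state=0)
    W_consensus = consensus.fit_transform(W_agg)    # final subgraphs

    # Re-express the original data in the consensus basis; a non-negative
    # least-squares fit would be more faithful, clipping is a shortcut here.
    H_consensus = np.clip(np.linalg.lstsq(W_consensus, A, rcond=None)[0], 0, None)
    return W_consensus, H_consensus
```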
Properties of subgraphs {#Sec18}
-----------------------
Applying NMF yielded a set of subgraphs, or patterns of functional connectivity (**W**), and the expression of these subgraphs over time (**H**). To understand the subgraphs, we first rearranged **W** into *k* different *N* × *N* subgraphs. To understand the roles of cognitive systems in each subgraph, we mapped each ROI to 13 putative cognitive systems from the previously defined parcellation: uncertain, sensory, cingulo-opercular task control, auditory, default mode, memory retrieval, visual, fronto-parietal task control, salience, subcortical, dorsal attention, ventral attention, and cerebellar^[@CR24],[@CR27]^. This yielded a 13 × 13 representation of each subgraph. To show which within-system and between-system edges in this representation were strongest, we applied a permutation test. We permuted the system labels of the ROIs and formed a matrix of system-by-system edges. This process was repeated 10,000 times to determine which system-by-system edge strengths were above the 95% confidence interval threshold after correction for multiple comparisons.
To characterize the connectivity pattern of each subgraph, we ordered them according to the relative strength of within-system edges versus between-system edges. For each subgraph, we calculated the average strength of within-system edges (edges that link two ROIs that both belong to the same system), and the average strength of between-system edges (edges that link an ROI in one system to an ROI in another system). Then, we subtracted the average strength of between-system edges (*E*~*B*~) from the average strength of within-system edges (*E*~*W*~) and divided this difference by their sum ($\frac{E_W - E_B}{E_W + E_B}$). We estimated the 95% confidence interval of these measures (average relative strength, average within-system strength or average between-system strength) by implementing bootstrapping 10,000 times.
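The within- versus between-system comparison and its permutation null can be sketched as follows, assuming a symmetric *N* × *N* subgraph matrix and a vector of system labels; names are illustrative.

```python
import numpy as np

def relative_strength(subgraph, system_labels):
    """Within- vs. between-system edge strength for one subgraph.

    subgraph: (N, N) symmetric matrix of edge weights;
    system_labels: length-N array assigning each ROI to a cognitive system.
    Returns (E_within, E_between, (E_W - E_B) / (E_W + E_B)).
    """
    labels = np.asarray(system_labels)
    iu = np.triu_indices(len(labels), k=1)            # unique edges only
    weights = subgraph[iu]
    same_system = labels[iu[0]] == labels[iu[1]]
    e_within = weights[same_system].mean()
    e_between = weights[~same_system].mean()
    rel = (e_within - e_between) / (e_within + e_between)
    return e_within, e_between, rel

def permuted_null(subgraph, system_labels, n_perm=10000, seed=None):
    """Null distribution of relative strength from permuting system labels."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(system_labels)
    return np.array([
        relative_strength(subgraph, rng.permutation(labels))[2]
        for _ in range(n_perm)
    ])
    # e.g., threshold = np.percentile(permuted_null(W_k, labels), 95)
```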
Next, we investigated the relationship between these connectivity patterns and the temporal expression of each subgraph. As the matrix of functional connectivity was divided in two, with the first half reflecting positive connectivity and the second half reflecting negative connectivity, the temporal expression matrix also had two halves, with the first reflecting positive expression over time and the second reflecting negative expression over time. As there was a strong negative correlation between positive and negative expression, we did all of our analyses on the relative expression (positive expression minus negative expression) of each subgraph^[@CR26]^. Across subgraphs, we calculated Pearson correlation coefficients between the average relative expression and the average within-system strength, average between-system strength, and average relative strength of each subgraph. To determine the significance of the correlation coefficients, we implemented 10,000 permutations of the subgraph labels to form the null distribution of correlation coefficients.
Modulation of subgraph expression by task factors {#Sec19}
-------------------------------------------------
We investigated how fluctuations in the trial-by-trial relative expression of each subgraph were related to four trial-by-trial task factors: CPP, RU, reward, and residual updating. CPP and RU were estimated based on the normative learning model^[@CR1]--[@CR3]^. Residual updating was derived as the residual of the behavioral regression model described above. We examined the effect of these four trial-by-trial task factors together, including all four in a regression model predicting trial-by-trial relative expression. Since NMF yielded values of temporal expression every 2 TRs (5 s), we applied a linear interpolation on the temporal expression values to obtain an expression value aligned with outcome onset on each trial. Regression models were implemented for each participant separately. Regression coefficients were then tested at the group level using two-tailed *t* tests.
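A sketch of this trial-aligned regression for one subgraph and one participant is shown below; the interpolation with `numpy.interp` and the variable names are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def trialwise_expression_regression(expr, expr_times, outcome_times,
                                    cpp, ru, reward, resid_update):
    """Regress trial-by-trial relative subgraph expression on task factors.

    expr: (n_windows,) relative expression (positive minus negative) of one
    subgraph; expr_times: (n_windows,) times (s) of those expression values;
    outcome_times: (n_trials,) outcome onsets; the remaining arguments are
    the trial-wise task factors.
    """
    # Linear interpolation aligns expression values with each outcome onset.
    expr_at_outcome = np.interp(outcome_times, expr_times, expr)

    X = np.column_stack([
        np.ones_like(cpp), cpp, ru, reward, resid_update,
    ])
    beta, *_ = np.linalg.lstsq(X, expr_at_outcome, rcond=None)
    return beta   # per-participant coefficients, later tested with t tests
```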
Association of individual learning with subgraph expression {#Sec20}
-----------------------------------------------------------
Next, we examined the relationship between subgraph expression and individual differences in the extent to which belief updating followed normative principles. As an index of normative learning for each individual, we used the sum of the regression coefficients on the CPP term (*β*~2~) and the RU term (*β*~3~) in the behavior model^[@CR2]^. This normative learning index reflected the extent to which a participant's trial-by-trial updates were influenced by the two normative factors CPP and RU. We examined the relationship between this index and two aspects of subgraph expression. First, across subjects, we calculated the Pearson correlation coefficient between normative learning and the dynamic modulation of relative expression by normative factors for each subgraph. This dynamic modulation was indexed as the sum of the regression coefficients for CPP and RU from the regression model predicting trial-by-trial relative expression. That is, dynamic modulation reflected how normative factors were associated with the change in relative expression of the subgraph. Second, across subjects, we calculated the Pearson correlation coefficient between normative learning and the average relative expression of each subgraph. To determine the significance of these correlation coefficients, we permuted the participant labels 10,000 times to form the null distribution.
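The permutation test for these across-subject correlations can be sketched as follows; the two-tailed p-value convention shown here (counting |r| values in the null at least as extreme as the observed one, with a +1 correction) is an assumption for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

def permutation_correlation(x, y, n_perm=10000, seed=None):
    """Pearson correlation with a permutation-based two-tailed p value.

    x: per-participant normative-learning index (beta_2 + beta_3);
    y: per-participant subgraph measure (dynamic modulation or average
    relative expression). Participant labels of x are shuffled to build
    the null distribution.
    """
    rng = np.random.default_rng(seed)
    r_obs = pearsonr(x, y)[0]
    null = np.array([
        pearsonr(rng.permutation(x), y)[0] for _ in range(n_perm)
    ])
    p = (np.sum(np.abs(null) >= np.abs(r_obs)) + 1) / (n_perm + 1)
    return r_obs, p
```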
Contribution of specific edges {#Sec21}
------------------------------
We evaluated the contributions of different types of edges to the task effects (influence of CPP, RU, reward and residual updating on subgraph expression across time) and individual differences effects (relationship between normative learning and subgraph expression across subjects). We mainly focused on the contribution of within-system edges and between-system edges. For this analysis, we implemented three types of comparison: Within versus All, Between versus All, and Between versus Within. For Within versus All, we kept within-system edges only and re-estimated task and individual differences effects; then, we compared these effects with the effects estimated using all edges. This comparison showed the change of effects after between-system edges were removed, and thus, this comparison revealed the contribution of between-system edges. For Between versus All, we kept between-system edges only and re-estimated task and individual differences effects. We then compared these effects with the effects estimated using all edges. In this comparison, within-system edges were removed and thus, we examined the contribution of within-system edges. Last, the comparison of Between versus Within is a direct comparison between effects estimated with between-system edges only and effects estimated with within-system edges only. Thus, this comparison examined the different contributions of between-system and within-system edges.
Specifically, for task effects, we examined the change of coefficients in the regression model that investigated the influence of four task factors---CPP, RU, reward and residual updating---on subgraph relative expression. The change was calculated for each participant separately, and the significance of change was then tested at the group level using two-tailed *t* tests. For individual differences effects, we examined the change of correlation coefficients for two types of relationship: the relationship between individual normative learning and dynamic modulation of subgraph relative expression and the relationship between individual normative learning and average subgraph relative expression. To determine the significance of the change of correlation coefficients, we permuted the labels of participants for individual normative learning 10,000 times to form the null distribution of the change of correlation coefficients.
We also investigated the contribution of different functional systems and the contribution of different system-by-system edges. For the contribution of different functional systems, we compared the effects after removing edges of one functional system with the effects estimated with all edges. For the contribution of different system-by-system edges, we compared the effects after removing one system-by-system edge with the effects estimated with all edges. Statistical testing was conducted with the same procedures described in the previous paragraph.
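A sketch of the edge-masking step underlying these comparisons is shown below; after masking, subgraph expression and the task and individual-differences effects would be re-estimated as described above. Function and variable names are illustrative.

```python
import numpy as np

def mask_edges(subgraph, system_labels, keep="within"):
    """Zero out within- or between-system edges of a subgraph matrix.

    subgraph: (N, N) subgraph adjacency; keep="within" retains only
    within-system edges (the Within-versus-All comparison), while
    keep="between" retains only between-system edges.
    """
    labels = np.asarray(system_labels)
    same = labels[:, None] == labels[None, :]
    mask = same if keep == "within" else ~same
    out = np.where(mask, subgraph, 0.0)
    np.fill_diagonal(out, 0.0)       # no self-edges
    return out
```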
Relationship between regional activity and connectivity {#Sec22}
-------------------------------------------------------
To investigate the relationship between dynamic functional connectivity and univariate activation, we fit a mass univariate GLM. In this GLM, the regressors were the outcome onset and four modulators of outcome onset: CPP, RU, reward and residual updating. These regressors were convolved with a gamma hemodynamic response function (HRF) as well as the temporal derivative of this function. Six motion parameters were also included as regressors.
To examine what aspects of our functional connectivity results could be accounted for by functional coactivation, we used the regression coefficients from the GLM above (including both the main HRF and its temporal derivative for each regressor) to create a predicted BOLD time series. We then repeated the same sequence of analyses described above on this predicted BOLD time series. This predicted BOLD time series captured all fluctuations in activity in that ROI that could be accounted for by the linear effects of CPP, RU, reward, and residual updating. However, this predicted BOLD time series lacked any statistical dependencies between regions that were present in the actual BOLD time series that could not be explained by task-driven changes in univariate activation. Thus, any functional connectivity results we observed with this predicted BOLD time series could be fully accounted for by task-driven changes in univariate activation.
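A simplified sketch of building the convolved task design (the outcome main effect plus the four parametric modulators, each with its temporal derivative) is shown below; the gamma HRF parameters and the onset-to-volume rounding are illustrative assumptions rather than the exact GLM specification. The predicted BOLD series for an ROI would be this design multiplied by that ROI's fitted GLM coefficients.

```python
import numpy as np
from scipy.stats import gamma

def gamma_hrf(tr=2.5, duration=30.0):
    """A generic gamma-shaped HRF sampled at the TR (illustrative shape)."""
    t = np.arange(0, duration, tr)
    h = gamma.pdf(t, a=6.0, scale=1.0)
    return h / h.sum()

def task_design(onsets, modulators, n_vols, tr=2.5):
    """Convolved task regressors: main effect plus parametric modulators.

    onsets: outcome onsets in seconds; modulators: (n_trials, 4) columns for
    CPP, RU, reward, and residual updating. Each regressor is convolved with
    the HRF and with its temporal derivative.
    """
    hrf = gamma_hrf(tr)
    dhrf = np.gradient(hrf)                           # temporal derivative
    vol_idx = np.clip(np.round(np.asarray(onsets) / tr).astype(int), 0, n_vols - 1)
    regressors = [np.ones(len(onsets))] + [modulators[:, j]
                                           for j in range(modulators.shape[1])]
    cols = []
    for weights in regressors:
        stick = np.zeros(n_vols)
        np.add.at(stick, vol_idx, weights)            # weighted stick function
        cols.append(np.convolve(stick, hrf)[:n_vols])
        cols.append(np.convolve(stick, dhrf)[:n_vols])
    return np.column_stack(cols)                      # design matrix
```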
Reporting summary {#Sec23}
-----------------
Further information on research design is available in the [Nature Research Reporting Summary](#MOESM3){ref-type="media"} linked to this article.
Supplementary information
=========================
{#Sec24}
Supplementary Information Peer Review File Reporting Summary
Source Data {#Sec25}
===========
Source Data
**Peer review information** Nature Communications thanks Markus Ullsperger and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
**Publisher's note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
=========================
**Supplementary information** is available for this paper at 10.1038/s41467-020-15442-2.
This work was supported by grants from National Institute of Mental Health (R01-MH098899 to J.I.G. and J.W.K.) and National Science Foundation (1533623 to J.I.G. and J.W.K.). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the paper.
C.-H.K. and J.W.K. designed the study. M.R.N., J.T.M., J.I.G., and J.W.K. designed the task. M.R.N. and J.T.M. collected the data. A.N.K. established the software for NMF. A.N.K. and D.S.B. provided suggestions in network analyses. C.-H.K. implemented all the analyses and visualization and drafted the paper. C.-H.K., A.N.K., D.S.B., M.R.N., J.T.M., J.I.G., and J.W.K. interpreted the results and revised the paper.
The data for the current study are available from the corresponding author upon request. The source data underlying Figs. [4](#MOESM4){ref-type="media"}c--e, [5](#MOESM4){ref-type="media"}, and [6a, b](#MOESM4){ref-type="media"} and Supplementary Figs. [3](#MOESM1){ref-type="media"}, [5](#MOESM1){ref-type="media"}--[7](#MOESM1){ref-type="media"}, [10](#MOESM1){ref-type="media"}--[12](#MOESM1){ref-type="media"}, [14](#MOESM1){ref-type="media"}--[16](#MOESM1){ref-type="media"}, [18](#MOESM1){ref-type="media"}--[20](#MOESM1){ref-type="media"} are provided as a Source Data file.
Code is available at <https://github.com/changhaokao/nmf_network_learning>.
The authors declare no competing interests.
| |
While on their south Florida trip they join other families who vacation there each year as well. All except this year.
As Alice hits her first double-digit birthday (she’s turning ten), some of her best and favorite friends and family are missing. One set of playmates is getting too old to miss homework and can’t make it. One older woman is trapped in a New York City blizzard, and Aunt Kate has brought along her boyfriend and his daughter Mallory. Will Alice be able to turn this birthday around? Will she find the junonia shell she has long been searching for?
Kevin Henkes had us wanting to find a junonia shell! He has done a brilliant job authoring this very sweet coming-of-age story. Through this simple and loving story we come to know the inner workings of a ten-year-old. He creates with impeccable sensory detail the tug of war between familiarity and uncertainty, coziness and independence, being self-centered and all-caring.
“Because she was looking down and focusing her attention so precisely, Alice lost track of time and of herself. She wouldn’t be able to put it into words, except to say she felt removed from the world. Or just at its edge. At the edge of the wild and beautiful world. She felt small, too. But part of something large. She was happy.”
― Kevin Henkes, Junonia
Something To Do:
Scaphella junonia:
Common names include the junonia, or Juno’s volute. It is a species of large sea snail, a marine gastropod mollusk in the family Volutidae, the volutes. This species lives at depths from 29 m to 126 m in the tropical Western Atlantic.
A junonia shell is a very prized item for shell seekers. It’s not easy to find and some people like Alice Rice spend years looking for one.
“It’s not visions of sugar plums that dance in the heads of Sanibel shell collectors, it’s visions of Scaphella junonia, commonly known as “THE Junonia.” There’s not a day that I roam the shell littered shores of Sanibel that I don’t lust for the discovery of this pride of the Island. The shape and color pattern are deeply etched in my memory bank. At the vaguest hint of this icon all senses go into overdrive and my shell scooping net flies into the water, trapping any moving object in sight.
It was ten years before this shelling dream became reality.” (Kathleen Hoover, Shell Seeker)
The Seashell Sun
We are always collecting things. We are a family of great collectors. Throughout the years we’ve had this ever-growing collection of seashells. They are always bundled up very gently and brought home to be placed in one of our jars, bowls, or glass boxes.
Shells are such a wonderful memory of going to the beach: walking in the surf, sand between our toes, seaweed round our ankles, looking closely for a little house without an inhabitant, wind through our hair, seagulls cawing and diving, all the while looking for our treasures of seashells in the sand.
When our beach days are but a memory, we refer to those golden times together as Seashell Sun Days.
Why not take some of our collected shells and make a seashell sun to remind us of those happy carefree days?
What you’ll need:
- A small wreath of any material
- Glue
- Small shells
- Medium shells
- Large piece of paper to work on as a mat
Take your wreath and spread a small glue strip along the edge. Place your small shells on the glue.
Continue by placing another small glue strip along the edge. Continue doing this until you have a row of small shells all around the edge of the wreath.
Working in sections, place a generous amount of glue in the remainder of the wreath. Place your larger shells and fill in the gaps with smaller shells.
Work your way around the wreath until it is all filled in.
Be sure to let your Seashell Sun dry completely before picking it up. The shells will fall off it otherwise.
When you’re finished, glue a ribbon to the back to hang it up or just lay it on an end table to use as decoration. Either way you will have a beautiful piece of art to remind you of your wonderful beach days.
This craft originally appeared in the Little Acorn Learning July Enrichment Guide.
FINAL NOTE:
Summer may be coming to a close, but there is still time to enjoy the outdoors and nature play in your own backyard!
The At Home Summer Nature Camp is a creative, affordable alternative to pricey summer camp. This 8-week eCurriculum is packed with ideas and inspiration to keep your kids engaged and happy all summer long. In one easy-to-follow PDF, you receive eight kid-approved themes, each including ideas and tutorials for: outdoor activities, indoor projects, arts & crafts, recipes, field trips, books & media, and more. Every weekly theme is packed with summer nature fun your family can have right in your own backyard.
Jump Into a Book is very proud and excited to be one of the many “Camp Counselors” and bringing fun, enjoyment, and family activity to your upcoming summer.
If you haven’t grabbed your e-Curriculum yet, I highly recommend hopping on board and “jumping” into the fun. Click the link below for details and ordering information. Welcome to Nature Summer Camp!
Click here to visit A Natural Nester. | https://www.jumpintoabook.com/2013/08/junonia/?doing_wp_cron=1571236234.7234799861907958984375 |
The girl looked down at her feet and looked back up to see a thin, almost thread like bridge with dozens of paper lanterns tied on almost every nook and cranny. What was once a dusk sky transformed and melted into a blue twilight full of stars that stretched endlessly. On the other side of the bridge was another island of celestial land with a twisted path that lead to a small source of light, no bigger than a candle’s flame. The girl felt this strong urge to get to that light.
“I’m scared to cross the bridge Indigo Orb. It looks so thin and fragile! What if I fall?”
“There’s nothing to fear, dear girl. It may seem thin and fragile, but in reality it is so much stronger than you think it is.” The orb nudged the girl’s shoulder forward. “Besides, even if you did, you have wings now!” Feather barked and bolted across the bridge to the other side. The bridge didn’t move a single inch when Feather ran across it. As soon as he got to the other side, he scratched his ear for a moment, then sat down on the edge of the celestial land and stared at the girl.
The girl took a deep breath and stepped onto the bridge. The bridge did sway for a moment, which caused the girl to death grip the handles of the bridge. Her stomach dropped and her body began to shake. She looked down to see an endless abyss of cosmic clouds and stars between the crevices of the bridge’s ancient planks.
“Keep going dear girl! Don’t stop!” The orb came up to the girl’s side, but did not touch her shoulder. “I’m right here. Keep going.” The girl held her breath for a moment and took a large step onto the bridge and it dramatically swayed side to side.
“I can’t do it!” cried the girl. “I can’t.” Tears began to form in the girl’s eyes, blurring her vision.
“Yes you can! You must. Even if it is one step at a time.”
“How was feather able to get across it so quickly?”
“Feather does not know fear and neither does fear know him.”
“That doesn’t make any sense at all Indigo Orb. I mean, the bridge didn’t even budge when he ran across it!”
“Take a breath, dear girl…” The girl took a massive deep breath in. “…and let go.” She exhaled at the end of his words. “It does not matter how long it takes for you to cross this bridge. You will cross it and you will be alright. Let’s go.”
The girl wiped away her tears, closed her eyes, took another deep breath, and steadied her grip onto the railings of the bridge. The girl took a step forward with her eyes closed and the bridge barely swayed. The girl took another deep breath and took another step forward. This time, the bridge did not sway.
“Yes! Yes! You got this!” The girl’s grip on the railings loosened and her steps became more solid. Within a few steps, the girl’s foot touched the cold earth on the other side of the bridge. She opened her eyes, jumped forward, and found that she had made it across. Feather jumped up and down, then ran to the girl.
“YESSS! I did it!” The girl grabbed the spirit orb and held it close to her chest. “Thank you so much for believing in me when I couldn’t believe in myself.” The orb’s warmth against her chest was one she had never felt before. A flash of light overwhelmed her vision for a few moments; and within those few moments, a gentle humanoid-like apparition with bright rose gold wings kissed her forehead. She opened her eyes to see that the gentle apparition was gone, but the Indigo orb was still tightly wrapped in her arms.
What was that? The twisted road that was once in front of the girl when she made it to the other side of the bridge was gone. What took its place was something the girl would never forget.
| |
Advocates for the homeless are expanding their protest in Eugene.
The group pushing for the right of homeless people to sleep legally on government property in Eugene has enlarged its efforts by pitching tents on what appears to be county property next to the Lane Events Center.
People affiliated with the group, SLEEPS, on Tuesday afternoon had set up eight tents outside the fairgrounds fence near West 13th Avenue and Adams Street.
For the past two weeks, a SLEEPS-organized group of homeless youths, adults and their dogs has been camping overnight on the county-owned Wayne Morse Free Speech Plaza outside the Lane County Courthouse, on the northeast corner of Eighth Avenue and Oak Street.
Recently, some people began camping in tents on a county-owned strip of land west of Oak Street, next to a county parking lot.
SLEEPS organizers voted last week to expand their encampment because of an increased number of protesters, but they said they remain committed to being peaceful and sanitary at each location.
Tin Man, who declined to give his legal name, said homeless advocates are in talks with Assistant County Public Works Director Howard Schussler about getting a portable toilet near the fairgrounds or moving to a more suitable spot, perhaps the field south of the fairgrounds near 18th Avenue.
“(The county) has their desires and they have their needs that they want to see met, and we want to find that middle ground where we can all meet them,” Tin Man said.
Some neighbors near the fairgrounds said they are generally supportive of the SLEEPS encampment expanding to locations near residential areas, as long as they aren’t being disruptive or damaging to the environment.
According to Tin Man, one neighbor even offered the campers clothes and fresh garden produce, and another neighbor is trying to help get a portable toilet for the site.
Lane County Board Chairman Sid Leiken on Tuesday said county commissioners will meet next week to develop a response.
Leiken, standing outside the Lane County Public Services Building on Tuesday looking at the campers, said he didn’t think the plaza occupants were protesting.
“Protest is one thing, but when I look at it now, I believe it has gone beyond protest. I believe they are campers,” Leiken said.
The encampment is intimidating to residents, especially women and senior citizens, who feel uncomfortable walking through the plaza, he said.
Protesters set up the Free Speech Plaza encampment immediately after Eugene Municipal Judge Karen Stenard on Aug. 15 nullified the citations of 21 people who were cited by Eugene police in January for trespassing on the plaza after its official county-set 11 p.m. closing time.
The judge’s ruling emboldened SLEEPS activists, who are demanding that local officials provide public property where homeless people can sleep, or suspend the city’s ordinance prohibiting overnight camping on all government property (federal, state and local) in Eugene.
The judge’s ruling left the county board to decide how to respond to the occupation of the plaza.
Next week, the board will seek to clarify its rules that govern the use of the plaza, Leiken said. He said he preferred not to share details until he and the board receive more advice from county attorneys.
“Maybe the language of our rules is too broad,” he said.
The downtown encampments also have led to pointed exchanges between West Lane County Commissioner Jay Bozievich and Eugene Mayor Kitty Piercy.
In a Monday e-mail to Piercy, Bozievich criticized city officials for not enforcing the city’s overnight camping ban on public property by taking action against the campers who had pitched tents across Oak Street from the plaza. The campers are on a strip of land owned by the county.
Bozievich alluded to the city’s decision last week to fence off areas on the city-owned east Park Block, across the street from the Free Speech Plaza, because people had been defecating at the city park.
“It also felt arbitrary that the city was willing to fence off areas of the Park Blocks but will not enforce the camping ban at our butterfly parking lot diagonally across the street,” he wrote.
Shortly before, City Manager Jon Ruiz, who was in the e-mail loop, had written Piercy and the City Council, saying that Eugene police can cite people for violating the city’s camping ban on public property, but the ordinance does not allow police to arrest or evict someone from public property, including the county’s.
In the interview, Leiken said he was a “little bit frustrated” by the city’s hands-off approach.