What is addition of angular momentum? It is often required to add angular momentum from two (or more) sources together to get states of definite total angular momentum. For example, in the absence of external fields, the energy eigenstates of hydrogen (including all the fine structure effects) are also eigenstates of total angular momentum.

What are the components of angular momentum? In three dimensions, the angular momentum for a point particle is a pseudovector r × p, the cross product of the particle's position vector r (relative to some origin) and its momentum vector; the latter is p = mv in Newtonian mechanics.

How do spin and angular momentum add? The electronic angular momentum is J = L + S, where L is the orbital angular momentum of the electron and S is its spin. The total angular momentum of the atom is F = J + I, where I is the nuclear spin.

How do you add two angular momenta? Classically, the angular momentum is a vector quantity, and the total angular momentum is simply J = J1 + J2. The maximum and minimum values that J can take correspond to the case where J1 and J2 are either parallel, so that the magnitude of J is |J1| + |J2|, or antiparallel, when it has magnitude ||J1| − |J2||.

What is the formula of spin angular momentum? The spin angular momentum of the nucleus and the neutron, and their orbital angular momentum vector, are expressed in units of the reduced Planck constant ℏ = h/2π.

How can we conserve angular momentum? Just as linear momentum is conserved when there is no net external force, angular momentum is constant or conserved when the net torque is zero. We can see this by considering Newton's second law for rotational motion: τ = dL/dt, where τ is the torque.

Why is spin considered an angular momentum? Spin is the total angular momentum, or intrinsic angular momentum, of a body. In fact, the spin of a planet is the sum of the spins and the orbital angular momenta of all its elementary particles. So are the spins of other composite objects such as atoms, atomic nuclei and protons (which are made of quarks).

Is spin angular momentum constant? When an object is spinning in a closed system and no external torques are applied to it, it will have no change in angular momentum. The conservation of angular momentum explains the angular acceleration of an ice skater as she brings her arms and legs close to the vertical axis of rotation.

What do you mean by spin angular momentum? Spin is an intrinsic form of angular momentum carried by elementary particles, composite particles (hadrons), and atomic nuclei. Spin is one of two types of angular momentum in quantum mechanics, the other being orbital angular momentum. These are indicated by assigning the particle a spin quantum number.

What is an example of angular momentum being conserved? An example of conservation of angular momentum is seen in an ice skater executing a spin. The net torque on her is very close to zero, because 1) there is relatively little friction between her skates and the ice, and 2) the friction is exerted very close to the pivot point.
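For reference, the quantum-mechanical counterpart of the classical statement above can be written compactly (a standard textbook result, added here for clarity rather than taken from this page):

```latex
% Addition of two angular momenta with quantum numbers j_1 and j_2:
% the allowed total quantum number j runs from their difference to their sum.
\[
  \hat{\mathbf{J}} = \hat{\mathbf{J}}_1 + \hat{\mathbf{J}}_2, \qquad
  j \in \{\, |j_1 - j_2|,\; |j_1 - j_2| + 1,\; \dots,\; j_1 + j_2 \,\}
\]
% Each allowed j contributes 2j + 1 states (m = -j, ..., +j), so the combined
% space has dimension (2 j_1 + 1)(2 j_2 + 1), matching the product of the parts.
```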
https://riunitedformarriage.org/what-is-addition-of-angular-momentum/
Leveraging Logins and Login Failures to Track Insiders

I recently had the chance to explain Tenable's approach to tracking insiders through authentication logs to a new employee. The conversation went something like this:

Q: If I handed you a pile of logs and told you that "Bob" in accounting was an insider threat, what would you do?

A: I'd look through all the logs for accounts that Bob had access to and attempt to audit which systems he accessed and possibly what he did.

Q: Great! Now if I told you that Bob also had access to other company accounts and he may have used those from his system, how would you look for those?

A: I'd look at the logins from Bob's computer and see if there were any other logins from there that were not Bob.

Q: Also great! Now here is the hard part – we are pretty sure there is an insider on our network, but it could be anybody, even Bob! How would you go about looking for any evidence of someone acting as a malicious insider?

And of course the answer involved a discussion of how Tenable audits accounts and authentications. Tenable's approach to this problem has many different components, some provided by Nessus and others provided through log analysis with the Log Correlation Engine (LCE). In the remainder of this blog, we will discuss the various audits, correlations and anomalies that can be used to pinpoint users accessing items in a suspicious manner. We will start out with basic enumeration and tracking of users and then progress to looking for anomalies that could indicate abuse.

Nessus

Nessus can use credentialed audits to enumerate the active user accounts in the operating system as well as some applications. From this list, an audit can be performed to see if the user accounts are authorized or not. Nessus also performs many different types of tests on the various accounts, including checking to see if the account has ever been used, if the password has been reset and so on. Below is a screen shot of a SecurityCenter dashboard component that shows a LAN segment with a variety of pass and fail tests for Windows user accounts.

Tracking User Provisioning

As part of the Log Correlation Engine's log normalization process, any type of log where a user is added to a system, removed or changed is recognized. This allows long term tracking of new users and data for real-time dashboards. In the screen shot below, multiple servers have been added to a dashboard and then for each server, indications of user accounts being added, removed or modified in the last 25 days are shown. This same data can be displayed as a continuous graph over the same 25-day period.

Tracking Account Activity

Both the Nessus audits and the LCE tracking of user provisioning events are very useful to determine when accounts were provisioned, but they don't specifically tell you when an account was used or has been potentially abused. The LCE normalizes authentication logs (both authorized and failed attempts) for hundreds of applications and devices. Below is a screen shot of a typical 24-hour window of login activity at a small IT business. This screen shot shows a variety of Windows and Unix login events. This data is further enhanced by LCE's anomaly detection. For any set of events, including logins, LCE will identify normal usage patterns and then generate an alert if there is a change. When computed for logins or login failures, these anomalies provide indicators of large changes in normal authentication activity.
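As a toy illustration of this kind of baseline-versus-actual comparison (this is not LCE's algorithm; the data, threshold, and function below are invented for illustration), one could flag days whose login-failure counts deviate sharply from the rest of the series:

```python
from statistics import median

def flag_anomalous_days(daily_counts, factor=3.0):
    """Return indices of days whose count exceeds `factor` times the median.
    A crude stand-in for comparing observed activity against a learned baseline."""
    baseline = median(daily_counts)
    return [i for i, count in enumerate(daily_counts) if count > factor * baseline]

# Synthetic week of login-failure counts; the sixth day (index 5) shows a sudden spike.
failures = [1200, 1100, 1350, 1250, 1300, 19500, 1280]
print(flag_anomalous_days(failures))   # -> [5]
```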
Following is a screen shot of a dashboard that plots logins and login failures above the actual number of logins and login failures that were anomalous over a one-week period. In the login chart, the daily logins ranged from close to zero to 60,000. Login failures also spanned from close to zero through 20,000. However, looking at the anomalies chart, despite the volume of authentication events logged, there were only around a dozen anomalies logged each day. The anomalies don't directly indicate insider activity, but the potential is there. They could indicate a new user forgetting a password and trying to authenticate many times. The point is that the logs provide an immediate list of issues to audit and analyze further.

Another very useful report that can be performed by SecurityCenter is to show the list of user accounts per system as reported by Nessus and then show the list of active user accounts observed through logs by the Log Correlation Engine. For example, on one of my home test Windows 7 laptops, LCE has observed these user names:

Combining the report with the account names seen in use along with the system accounts installed can help identify accounts that have not been in use and may not be needed.

Advanced User Tracking

Besides tracking actual authentication events, LCE will learn much more about the local network and where user accounts are used. The following list of events is generated by LCE as it learns about each new user and the unique relations to the network and systems.

- New_User – LCE has learned about a user account in use on a specific system.
- New_User_Source – LCE has learned about a source system where an account has originated from for the first time.
- New-Network-User – LCE has observed an authentication log in which the relationship between a user ID and the IP address associated with it has changed.
- VPN_Login_From_Unusual_Source – LCE has seen a VPN authentication occur from a source that is not normal for the user ID.

Each of these is a potential event to investigate, but understanding them in context is more important. For example, if a user's system was compromised by malware and then their credentials used to log in to a new system not normally accessed, the New_User_Source event will fire. Perhaps various systems at the NSA would have had logs like those shown below for the first time that Ed Snowden used his credentials to access these systems:

Hunting insiders for abuse patterns is very difficult. In the Ed Snowden case above, one would be correct to state that since this account was already created and he logged in with his access, he really wasn't "hacking". What really happened is that his accounts had too much access, and once he started walking through all of the available systems, this could have set off an anomaly alert.

Conclusion

Tracking insiders through log analysis and account auditing is a key component of the SecurityCenter Continuous View solution from Tenable. By combining scanning, sniffing and logging into one platform, detecting insider threats and suspicious users is possible through the use of anomaly detection and auditing. To learn more about Tenable's ability to watch employees and detect suspicious activity, please contact our sales team.
https://www.tenable.com/blog/leveraging-logins-and-login-failures-to-track-insiders
An audit log is a detailed, chronological record of all changes to an operating system (OS), application, or device, with the purpose of tracking system operations and use.

Modern IT systems are extremely complex, and often require a significant amount of oversight. When performance begins to lag, errors manifest themselves, or security or compliance issues arise, knowing who has accessed the system and what actions they have taken may be essential. Also called audit trails, audit logs provide this relevant information. Acting as a record of events within a computer system, an audit log allows auditors and IT personnel to trace user actions. This can provide vital insight into how the system is being used, where problems may be occurring, and what security weaknesses might be present and exploitable. Additionally, regulatory compliance may require that audit logs be maintained for a specific length of time.

As with most other aspects of business, detailed records that are easily accessible by authorized individuals provide a number of clear advantages. And, for those organizations that work within industries governed by compliance frameworks, audit logs are more than simply beneficial; they're a standardized requirement. Here, we take a closer look at some of the most common advantages of maintaining audit trails:

Common regulatory frameworks—such as PCI DSS and HIPAA—require the use of audit logs to prove compliance. These function as official business records, giving auditors essential resources for inspecting and approving IT systems, and helping protect businesses from potential fines or other penalties.

The key to effective IT security is reliable knowledge. Audit trails offer detailed records related to all activity within the IT system. This includes not only standard activity, but also any activity that may violate data-security practices, include unauthorized data access, or even indicate a security breach by an outside threat actor. Correctly used, audit logs help IT professionals identify possible security vulnerabilities, identify and remediate data misuse, and respond quickly to emergent security events. And, given their official nature, these logs may also be used as evidence in court.

Understanding how users are interacting with a system is the first step to improving those interactions. By tracking user activity, administrators and other authorized monitors gain valuable insight into issues related to performance, productivity, efficiency, and more. At the same time, they can more quickly identify and resolve potentially problematic issues before they have a chance to spiral out of control.

Regulators, partners, vendors, and even customers want to know that a business is secure before they invest their time or resources into it. A clear audit trail details what security measures the organization is taking to ensure data privacy. Using audit logs as part of a risk management framework may help demonstrate that a business is a low-risk opportunity.

Given the many benefits associated with maintaining reliable audit logs, it's no surprise that these digital records are often applied across a range of use cases. These include the following:

Businesses that require compliance certification must have complete digital records of how their systems function and are being accessed and used. An audit trail gives auditors the information they need to ensure that the organization is operating within acceptable parameters and without any problematic anomalies.
Combined with real-time tracking systems, audit logs can help IT specialists recognize abnormal and/or illegal actions occurring within the system. Audit logs give threat detection the evidence and insight it needs to quickly identify potential security issues as they arise.

In the event that an organization is involved in legal action as a result of its data or IT systems, audit logs may be used as forensic evidence. This can help a company prove that it was operating within established compliance guidelines, as well as be used as evidence against those who may have been taking illegal action within the system.

System and organization controls (SOC) reports give companies the confidence to work with service providers, showing that they are operating in a compliant manner. Audit logs make SOC reporting easier and more complete, helping vendors clearly establish their credibility and trustworthiness.

Audit logs allow businesses to place IT-system activities under a microscope. This makes it possible to quickly discover and resolve even low-impact bugs, and also simplifies recovery following a security intrusion.

To provide the above benefits, an audit log must include a number of essential details. These details help establish a clear picture of the IT environment and the circumstances associated with every action within the system. As such, a reliable audit log must include:

- A unique identifier associated with an individual terminal that can be used to identify the source of the system access.
- A unique identifier associated with a specific user that can be used to identify who is accessing the system.
- Reliable timestamps indicating when system actions are being attempted or performed, as well as the overall time duration of system access.
- Information detailing which networks a user is attempting to access (even if the attempt is unsuccessful).
- Information detailing which systems, data, and applications a user is attempting to access.
- Information detailing which specific files a user is attempting to access.
- Detailed information describing any changes made to the system, network, applications, or files.
- Details on which system utilities a user is accessing and how they are being used.
- Information related to any security alarms or notifications that may be activated by the user.
- A clear record of all system notifications triggered by the user while in the system.

Thankfully, long gone are the days when access had to be manually logged and reviewed; today, most relevant technology solutions include the automatic creation of audit trails, recording and storing data for every action performed in the system, without exception. That said, there are still certain hurdles organizations may face when implementing a working log management strategy. These challenges may include the following:

Audit logs consist of large amounts of data, and the more processes, systems, devices, and actions being tracked, the more storage space is needed. This may create storage problems for businesses, increasing the necessary storage investment—either with regard to ensuring your SaaS platform provider agreement includes ample data storage space, or, if you haven't made the leap to a modern GRC solution, setting up more in-house servers or paying more for off-site storage space.

Although one of the primary advantages of an audit trail is that it allows for increased security, the audit log itself may represent a security vulnerability.
When too many people have access to the audit log information, sensitive data captured during the audit may become exposed. One way to mitigate this is to establish persona-based landing pages and reports to view your audit activities and engagement tasks in real time. Likewise, audit logs themselves may be less secure than the systems they monitor, giving threat actors an easier path to sensitive data. Accessing them through a portal, from a secure SaaS platform, helps mitigate this risk.

Even within a single organization, disputes can arise over how long a digital record should be maintained. Some laws and regulations may establish a minimum duration (such as six months to seven years). Beyond that, it is up to the business to decide how long to store the audit log data before disposing of it. The further back the audit trail goes, the better protected the organization will be, but keeping audit data longer than needed may represent an unnecessarily large storage spend.

Overly thorough audit logs may slow down system responsiveness. Similarly to the previous point, IT decision makers might have to work to find the right balance between security and system efficiency.

Organizations that rely on a number of different systems, devices, applications, etc., may encounter problems, as each log source produces its own audit log (or, in some cases, produces multiple audit logs). This creates not only the data-storage issues mentioned above, but can also lead to inconsistent reporting, possibly making it difficult to link or reconcile audit trails across multiple sources.

Occasionally, log analysis may be treated as a low-priority task. As such, those who are responsible for carrying it out may not receive the right training or have access to effective tools. As a result, analyses may be rushed, incomplete, inaccurate, or simply not performed at all except in response to a data breach or other emergent situation.

ServiceNow Governance, Risk, and Compliance (GRC) brings audit management capabilities to a single, centralized location. Relevant data is automatically collected and analyzed, audit trails are established, and compliance and security issues are quickly identified. Learn more about Audit Management in ServiceNow GRC, and get the insight you need to ensure your systems and users are working together optimally.

ServiceNow makes the world work better for everyone. ServiceNow allows companies of all sizes to seamlessly embed risk management, compliance activities, and intelligent automation into their digital business processes to continuously monitor and prioritize risk. ServiceNow Risk solutions help transform inefficient processes and data silos across your extended enterprise into an automated, integrated, and actionable risk program. You can improve risk-based decision making and increase performance across your organization and with vendors to manage the risk to your business in real time, and make risk-informed decisions in your daily work without sacrificing budgets. Manage risk and resilience in real time with ServiceNow.
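To make the earlier list of required fields concrete, here is a minimal sketch of a single audit record (the field names and JSON serialization are illustrative, not a ServiceNow schema):

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditLogEntry:
    """Minimal illustrative audit record covering the essential details discussed above."""
    terminal_id: str                 # unique identifier of the accessing terminal
    user_id: str                     # unique identifier of the user
    timestamp: str                   # when the action was attempted or performed
    network: str                     # network the user attempted to reach
    resource: str                    # system, application, or file accessed
    action: str                      # what was done (read, modify, delete, ...)
    changes: str = ""                # description of any changes made
    alerts: list = field(default_factory=list)  # security alarms or notifications triggered

entry = AuditLogEntry(
    terminal_id="TERM-0042",
    user_id="jdoe",
    timestamp=datetime.now(timezone.utc).isoformat(),
    network="corp-lan",
    resource="/finance/q3-forecast.xlsx",
    action="modify",
    changes="updated revenue projections",
)
print(json.dumps(asdict(entry), indent=2))   # ship to the log store as JSON
```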
https://www.servicenow.com/products/governance-risk-and-compliance/what-is-an-audit-log.html
The California Immunization Registry (CAIR) is a secure, confidential, statewide computerized immunization information system for California residents. The CAIR system consists of 9 distinct regional immunization registries (mostly multi-county regions). Each registry is accessed online to help providers and other authorized users track patient immunization records, reduce missed opportunities, and help fully immunize Californians of all ages. Currently, 7 of the 9 CAIR regions use the same software, also called 'CAIR', and are supported by a centralized Help Desk. Local support staff are also available to assist users. The remaining 2 regions, San Diego and the greater San Joaquin Valley, utilize different software to access patient records. In addition, Imperial operates a county immunization registry that is not currently part of the CAIR system. Contact information for these and other regional registries is on the CAIR Regions page.

How the CAIR system works:

- California law allows health care providers to share patient immunization information with an immunization registry as long as the patient (or patient's parent) is informed about the registry, including their right to 'lock' the record in CAIR so that immunization information is not shared with other CAIR users (though the data remains available to the patient's provider). CAIR 'Disclosure' and 'Decline to Share' forms are available on the CAIR Forms page.
- Participation in CAIR is voluntary and is open to healthcare providers, schools, child care facilities, county welfare departments, family child care homes, foster care agencies, WIC service providers, and health care plans. To participate, users must sign a confidentiality agreement stating they will maintain the confidentiality of the patient immunization information and will only use the information to provide patient care or to confirm that childcare or school immunization requirements have been met.
- Because CAIR currently exists as 9 separate registries, an authorized user can only access patient data within their defined geographic region (see above). In the coming years, CAIR will integrate its existing regional databases so that immunization data for patients residing anywhere in the state will be accessible to any CAIR user in California.
- Health care providers and other authorized users log in to the registry using a user ID and password. In addition to accessing patient immunization information, users can utilize the integrated vaccine algorithm to determine vaccinations due, enter new patients or vaccine doses administered, manage vaccine inventory, run patient or inventory reports, or run reminder/recalls on their patients. New patients or vaccine doses can either be entered directly into CAIR using the web interface or can be submitted electronically as aggregated data files (e.g. exported from their EHR systems) for upload to CAIR. Many prominent private and public health care entities already share data electronically with CAIR!
https://cairweb.org/about-cair/
What is OpenStack?

OpenStack is an open source software platform, originally developed in 2010, that allows organizations to build and maintain public and private clouds. OpenStack makes it possible to create cloud infrastructure with compute, storage, and networking components in an Infrastructure-as-a-Service (IaaS) model. It is designed for flexibility, massive scalability, and enterprise-grade security. OpenStack allows cloud operators to deploy virtual machines (VMs) that handle various cloud management tasks. It provides an infrastructure that allows cloud users to quickly and easily provision and deprovision cloud components and resources, with the ability to quickly scale resources up and down to match their current needs. All aspects of OpenStack can be accessed programmatically via APIs, to facilitate cloud automation.

Recent OpenStack Security Vulnerabilities

OpenStack is a mission critical system used to deploy enterprise resources at large scale. For this reason, it is a prime target for attackers. Because OpenStack projects are open source, attackers can study their code and discover vulnerabilities relatively easily. Here are a few severe vulnerabilities discovered in OpenStack over the past few years, codified as common vulnerability enumerations (CVEs). Keep in mind that many more vulnerabilities exist, some of which may not have been discovered yet. The biggest danger is a new "zero day" vulnerability that could exist in your OpenStack deployment right now, unbeknownst to you or OpenStack contributors. This requires a proactive approach to securing applications in the OpenStack toolset, as I discuss in the following section.

CVE-2020-26943

This is an issue with the OpenStack blazar-dashboard component. Users who are granted access to Horizon's Blazar dashboard can trigger code execution on the Horizon host, due to the use of the Python eval() function. This can result in unauthorized access to the Horizon host and compromise of the Horizon service. The vulnerability affects all setups that use the blazar-dashboard plugin.

Affected versions: 1.3.1, 2.0.0, 3.0.0, and earlier

Affected setup: OpenStack users running the Horizon dashboard using the blazar-dashboard plugin.

CVE-2021-20267

A flaw was found in the default Open vSwitch firewall rules for openstack-neutron. Anyone controlling a server instance connected to a virtual switch can send a carefully crafted packet and spoof the IPv6 address of another system on the network, causing denial of service (DoS). Another possibility is that traffic destined for other destinations may be intercepted by an attacker.

Affected versions: Openstack-neutron versions 15.3.3, 16.3.1, 17.1.1, and earlier.

Affected setup: Only deployments using the Open vSwitch driver.

CVE-2021-38598

Hardware address spoofing can occur when using the linuxbridge driver with ebtables-nft on Netfilter-based platforms. Anyone controlling a server instance connected to a virtual switch can send a carefully crafted packet and spoof the hardware address of another system on the network. This can result in a denial of service (DoS) or, in some cases, interception of traffic intended for another destination.

Affected versions: Openstack-neutron versions prior to 16.4.1, 17.x before 17.1.3, 18.x before 18.1.1.

Affected setup: Linuxbridge driver with ebtables-nft on Netfilter-based platforms.

CVE-2021-40797

A problem was found in the routing middleware for openstack-neutron.
Authenticated users can make API requests that include non-existent controllers, which can cause API workers to consume excessive memory, resulting in poor API performance or denial of service.

Affected versions: Openstack-neutron versions prior to 16.4.1, 17.x before 17.2.1, 18.x before 18.1.1.

Affected setup: OpenStack-neutron.

CVE-2022-23452

An authentication flaw was found in openstack-barbican. This vulnerability could allow anyone with an admin role to add a secret to another project container. This vulnerability could allow an attacker on the network to consume protected resources and cause denial of service.

Affected versions: Openstack-barbican up to and excluding version 14.0.0.

Affected setup: Using the openstack-barbican secrets management REST API.

OpenStack Security: 4 Critical Best Practices

1. OpenStack Authentication

Authentication is critical in a production OpenStack deployment. The OpenStack identity service, Keystone, supports multiple authentication methods, including username and password, LDAP, and external authentication methods. Upon successful authentication, the identity service provides the user with an authentication token for subsequent service requests. Transport Layer Security (TLS) uses X.509 certificates to enable authentication between service accounts and human users. The default mode of TLS is server-side authentication only, but certificates can also be used for client-side authentication. Use multi-factor authentication for privileged user accounts accessing cloud networks. The identity service also supports external authentication services through the Apache web server. Servers deployed in OpenStack can also use certificates to enforce client authentication. By using strong authentication, you can protect cloud users from brute force attacks, social engineering, phishing attacks, account takeover, and many other cyber threats.

2. OpenStack Backup and Recovery

In a cloud deployment, machines eventually become outdated, software needs upgrading, and vulnerabilities are discovered. There must be a convenient way to apply changes to the software or make changes to the configuration. Backup and recovery is an important part of an OpenStack security strategy. To keep backups secure, ensure only authenticated users and authorized backup clients can access the backup server, and always store and transmit backups with data encryption options. Use a dedicated, hardened backup server. Backup server logs should be monitored daily and should be accessible only to a small number of people. Finally, test data recovery regularly. In case of a security compromise, a good way to recover operations is to terminate running instances and restore them from images stored in a protected backup repository.

3. Secure OpenStack API Endpoints

Any process using the OpenStack cloud starts by querying the API endpoint, making API security a key challenge for OpenStack deployments. Although public and private endpoints present different challenges, they are both high-value assets that can pose significant risks if compromised. You can force specific services to use specific API endpoints. Any OpenStack service that communicates with the APIs of other services must be explicitly configured to access the appropriate internal API endpoints, and should not have access to other endpoints. API endpoint processes should be isolated. It is especially important to isolate any API endpoint that is in the public domain.
If possible, API endpoints should be deployed on different hosts for greater isolation.

4. Forensics and Incident Response

Log generation and collection are important components for securely monitoring an OpenStack infrastructure. Logs provide visibility into the day-to-day operations of administrators, tenants, and guests, as well as the activity of compute, networking, storage, and other cloud components. Logs are important for proactive security and ongoing compliance efforts, and provide a valuable source of information for incident investigation and response. For example, analyzing access logs for identity services or external authentication systems can alert you to login failures, their frequency, the source IP, and the context in which anomalous access requests occurred. Identifying these types of anomalous events makes it possible to identify and respond to security incidents. You can then take action to mitigate potentially malicious activity, such as blocking IP addresses, recommending stronger user passwords, or disabling dormant user accounts.

Here are some important considerations when implementing log aggregation and analysis:

- Have a way to detect that no logs are being generated—this indicates service failure or an intruder hiding their trace by temporarily turning off logging or changing the log level.
- Gain visibility over application events—in particular, unplanned start and stop events should be monitored and investigated for possible security impacts.
- Monitor operating system events—any OpenStack service system should be monitored for events like user logins and reboots. These provide insights into security issues as well as misconfiguration or incorrect system usage.
- Detect high load—if the logs indicate high load on a management component, deploy additional servers for load balancing to ensure high availability.
- Other actionable events—watch for other important events such as bridge failures, refreshes of compute node iptables, and unreachable cloud instances, which can impact the end user experience.

Conclusion

In this article, I covered some of the major vulnerabilities discovered in the OpenStack project in recent years, and provided several best practices that can help you take a proactive approach to OpenStack security and prevent the next attack:

- OpenStack authentication—implement strong authentication for OpenStack services using TLS and multi-factor authentication.
- OpenStack backup and recovery—use a dedicated, hardened backup server for your critical OpenStack components.
- Secure OpenStack API endpoints—isolate API endpoints as much as possible and ensure that OpenStack services can only access endpoints they are explicitly allowed to use.
- Forensics and incident response—ensure you can collect, visualize, and act on logs from OpenStack systems to identify security incidents.

I hope this will be useful as you improve the security posture of your OpenStack deployments.
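As a small supplementary sketch of best practice 1 above (assuming a Python client using the keystoneauth1 library; the endpoint URL, credentials, and CA bundle path are placeholders, not values from this article):

```python
from keystoneauth1.identity import v3
from keystoneauth1 import session

# Password authentication against Keystone over TLS. The auth_url, credentials,
# and CA bundle below are placeholders for your own deployment.
auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="svc-monitor",
    password="REPLACE_ME",
    project_name="monitoring",
    user_domain_id="default",
    project_domain_id="default",
)

# 'verify' points at the CA bundle used to validate the server certificate;
# a client certificate/key pair could additionally be supplied via 'cert'
# where client-side authentication is required.
sess = session.Session(auth=auth, verify="/etc/ssl/certs/openstack-ca.pem")

# Subsequent service clients can reuse this session, so every API call carries
# a scoped token obtained over the verified channel.
print(sess.get_token())
```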
https://superuser.openstack.org/articles/openstack-security-a-practical-guide/
Best practices for keeping sensitive business data secured at all times

Seemingly every day, another company is making headlines for all the wrong reasons when its internal records, including sensitive user information, are exposed by a data breach or lost due to a data disaster. Sometimes it's the result of a skilled and determined outside attacker or an unavoidable natural event. But, in other cases, a simple lack of basic security protocols and recovery planning led to an expensive and embarrassing mistake.

Defend Your Data

25,575 records were lost or exposed in the average data breach event in 2019, at a cost of $3.92 million, according to IBM. That's a 1.6% increase from 2018 and a 12% jump over the last five years. As for overall data disasters, a study by the University of Texas found that 43% of companies that suffer a catastrophic data loss will never reopen. The list of risk factors is long and growing:

- Hackers / Cybercrime
- Poorly Trained Personnel
- Power Outages
- Industrial Espionage
- Rogue Insiders
- Natural Disasters

In 2020, the value of data and the necessity that sensitive and personally identifiable user information be securely accessible online are clear to most companies. So too is the need to ensure that data never falls into the wrong hands or is inadvertently lost. But, with the rise in managed service options for IT, how can companies be sure their technology partner is safeguarding their data? Here are 10 data security best practices to expect from IT professionals:

1. Backup Early and Often

Backups are a vital component of a data recovery plan. Properly executed, backups allow a business that suffers a data loss to set things right rapidly and suffer a minimum of costly downtime. With the rise in ransomware attacks (malicious software that encrypts your data and demands a ransom to unencrypt it), the need for redundant backups covering a wide period of time has grown more important. Even when the ransom is paid, the attackers don't always free the hostage data, so it's vital to have a backup that predates the attack.

2. Be Proactive, Not Reactive

The International Assembly of Privacy Commissioners and Data Protection Authorities promulgates standards for privacy protection. Among its foundational principles is a recommendation to take action before issues arise, not after. That's good advice for safeguarding privacy and data in general. So, use up-to-date network hardware, check all the settings upon installation, change all the default passwords, and install updates and patches promptly and routinely. Also, enable firewalls and virus scanners, and keep a close eye on third-party requests to access data.

3. Stay Alert for Phishing

Phony emails, fake landing pages, and even spoofed phone numbers can all be used by hackers to trick unwitting people into divulging their passwords, account numbers, or other sensitive information — which they then use to gain unauthorized access and do even more damage. The FBI's Internet Crime Complaint Center reported that in 2019 alone business email compromise was linked to $1.7 billion in losses. The best defense against phishing attacks is education. All personnel should be taught to spot and report suspicious messages, and if there is even the slightest doubt, don't click anything, and try to contact the claimed sender by another means to confirm.

4. Log Everything

Moving data around entails a number of risks, but one of the advantages of a digital workflow is the ability to track and record all activity.
That includes who is logging in and when, what applications are being used, and what data is being accessed, modified, or transmitted. Keeping good logs helps security professionals discover unusual patterns of behavior and ferret out malicious actors that may be hiding under their noses — or just authorized users that aren't adhering to best security practices.

5. Encrypt All Devices

It's not that easy to lose a server room, but individual laptops and mobile devices are lost or stolen every day, and when user information and authorization credentials are stored on them, the fallout can be much worse than just replacement costs. Make sure all devices that may contain sensitive data are locked down and encrypted to prevent tampering or unauthorized access.

6. Harden Your Authentication

One of the most easily avoided data breach risk vectors is also among the most common. When users reuse the same password many times over, use weak or default passwords, or share their passwords openly, they are undermining all of an organization's expensive and well-thought-out security measures. Encourage everyone to use multifactor authentication (e.g. SMS confirmation, biometrics, or physical security keys in addition to the password), change passwords routinely, and use a password generator to create strong and random passwords.

7. Protect Data Centers

Modern cybercriminals do most of their dirty work over the internet, but that doesn't mean the physical space where data is stored can be left unattended. Break-ins, though rare, do happen, and natural and man-made disasters can also occur. Guard server rooms and facilities with security personnel, CCTV monitoring, and biometric access controls. Ensure backup power is available from an onsite generator, and closely observe environmental controls, because heat and humidity rarely mix well with delicate technologies.

8. Use the Principle of Least Privilege

The goal of an effective data security framework is to provide access only to authorized users, and, importantly, only as much access as they actually need. Too many people with high-level access is a recipe for trouble. Each new user should be given the fewest privileges necessary to perform their function. It's much smarter to elevate their privileges if requested/required rather than immediately open up full administration control. Additionally, once access to sensitive data is no longer necessary, immediately revoke that authorization — particularly for individuals leaving the company.

9. Conduct Penetration Testing

A company's current security protocols and data protection plans might be sufficient to thwart attackers and prevent breaches, but it's always nicer to find out in the course of your own due diligence, instead of suffering an actual breach. Penetration testing puts your system to the test by simulating an attack. It is useful for identifying weaknesses and vulnerabilities so that a full risk assessment can be performed and the system can be hardened.

10. Don't Lock the Users Out

With so much focus on preventing unauthorized users, it's sometimes forgotten that the organizations holding business data also have an obligation to ensure authorized users have uninterrupted, 24/7/365 access to their data and documentation for all networks and systems. Network management and data safety protocols are all in place to serve the end users. The user credentialing system also requires constant monitoring to keep data where it belongs: in authorized systems, getting work done.
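As a small illustration of practice 6, a strong random password can be generated with Python's standard secrets module (the length and character set below are arbitrary choices for the example, not a recommendation from this article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # store it in a password manager and never reuse it
```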
Don't Wait Until Something Goes Wrong

Most business leaders (94%) report being extremely concerned about data breaches, according to a 2019 Forbes poll. Yet, 76% admitted they didn't have a robust plan in place to deal with a major data incident. In the wake of a breach, most companies quickly try to put in place protocols and tools to prevent a recurrence, but, according to the analysts at Forbes, if just 10-20% of that post-breach budget was spent beforehand, the incident could have been avoided. So, lock down your data, train your team to handle it with best practices, and vet your managed service provider and other third parties with access to your data to confirm they are doing the same.

D2 has been managing and protecting business data for over two decades. We rely on leading-edge technologies and the latest security best practices to safeguard sensitive information. Contact D2 today to learn how our collaborative approach and practical technology strategies can drive the efficiency, security and productivity goals of your business.
https://d2integratedsolutions.com/2020/06/29/how-should-an-outsourced-it-firm-manage-my-companys-data-and-user-information/
How to Use Zero Trust to Meet NIST SP-800-171v2 Access Control Practices for Remote Data Access

The Zero Trust Data Access architecture of FileFlex Enterprise can greatly aid in compliance with NIST access control requirements as outlined in SP-800-171v2 for remote access and sharing.

Estimated reading time: 5.5 minutes

What is NIST SP-800-171?

The National Institute of Standards and Technology (NIST) has put together a unified standard (NIST SP 800-171) to better defend the vast attack surface of the federal government supply chain. It provides federal agencies with recommended security requirements to be applied to their contractors for the protection of confidential information (Controlled Unclassified Information, or CUI) stored and transmitted by those contractors. The security guidelines outlined in NIST SP-800-171 are intended for use by federal agencies in their agreements with contractors and other non-federal organizations.

How FileFlex Enterprise Meets NIST Access Control Requirements for Remote Data Access

This blog looks at FileFlex Enterprise and shows how it meets the published best "Access Control" practices for remote data access outlined in NIST SP-800-171v2.

| NIST SP 800-171v2 | Section Summary | Supports Compliance? | How FileFlex Helps With Compliance |
|---|---|---|---|
| 3.1.1 | Limit system access to authorized users, processes acting on behalf of authorized users, and devices (including other systems). | Yes, supports compliance | FileFlex delivers this requirement within its secure ZTDA platform down to file- and folder-level micro-segmentation. Users are bound to accounts, and accounts are authorized and managed by administration for all data access controls. Connector Agents installed within any secure, firewalled environment act on behalf of its authorized users. |
| 3.1.2 | Limit system access to the types of transactions and functions that authorized users are permitted to execute. | Yes, supports compliance | FileFlex administration can manage the data access privileges for every user or user group they assign content repositories to. This includes all file management privileges and functions. |
| 3.1.3 | Control the flow of CUI in accordance with approved authorizations. | Yes, supports compliance | Flow control restrictions include: (1) keeping export-controlled information from being transmitted in the clear to the Internet: FileFlex provides encryption of data in motion in 3 ways: encrypted micro-tunnels (per transfer), data encryption (before entering the micro-tunnel), and Intel SGX hardening (chip-to-chip encryption); (2) blocking outside traffic that claims to be from within the organization: FileFlex provides a secure, controlled environment for users to facilitate their data access requirements, and every user interaction flows through the FileFlex Policy Server for authentication, access and permissions; (3) restricting requests to the Internet that are not from the internal web proxy server: FileFlex can be configured to work with internal web proxy servers and function as such; (4) limiting information transfers between organizations based on data structures and content: out of scope. Organizations commonly use flow control policies and enforcement mechanisms to control the flow of information between designated sources and destinations (e.g., networks, individuals, and devices) within systems and between interconnected systems. Varying deployment models of FileFlex allow flow control requirements to be met in dynamic landscapes. |
| 3.1.4 | Separate the duties of individuals to reduce the risk of malevolent activity without collusion. | Yes, supports compliance | Within FileFlex, the duties of end users and administrators are separated based on the role assigned and user type. |
| 3.1.5 | Employ the principle of least privilege, including for specific security functions and privileged accounts. | Yes, supports compliance | FileFlex uses least-privilege access on all accounts. Privileges can be managed on all accounts individually or by group. |
| 3.1.6 | Use non-privileged accounts or roles when accessing nonsecurity functions. | Yes, supports compliance | Regarding data access functions, role-based access control governs privileged vs. non-privileged access. Otherwise out of scope. |
| 3.1.7 | Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs. | Yes, supports compliance | Non-privileged accounts cannot execute privileged functions within the FileFlex system. All individual user functions are logged within FileFlex. |
| 3.1.8 | Limit unsuccessful login attempts. | Not supported | Available in a future update coming soon. |
| 3.1.9 | Provide privacy and security notices consistent with applicable CUI rules. | Yes, supports compliance | Privacy and security notices can be displayed as the user logs into their account. Two-factor authentication can also be implemented for additional security notification functions. |
| 3.1.10 | Use session lock with pattern-hiding displays to prevent access and viewing of data after a period of inactivity. | Yes, supports compliance | The FileFlex interface utilizes session lock security that is enabled after a period of inactivity. |
| 3.1.11 | Terminate (automatically) a user session after a defined condition. | Yes, supports compliance | Device access and bandwidth controls, once set, will automatically terminate the session and/or data access. |
| 3.1.12 | Monitor and control remote access sessions. | Yes, supports compliance | All FileFlex remote access sessions are logged and tracked for monitoring purposes down to the user. All remote access sessions are controlled through administration. Local Connector Agents act on behalf of remote access sessions, keeping remote connectivity at bay. |
| 3.1.13 | Employ cryptographic mechanisms to protect the confidentiality of remote access sessions. | Yes, supports compliance | All sessions are double encrypted from end to end using encrypted micro-tunnels for communication and transfer. Intel SGX integration utilizes secure enclaves within the chipset itself for encryption key generation, providing even further levels of cryptography at the deepest level – within the silicon itself. |
| 3.1.14 | Route remote access via managed access control points. | Yes, supports compliance | All FileFlex transmissions flow through the FileFlex Policy Server for authentication, permission, monitoring and Zero Trust operational purposes, for every single transaction. |
| 3.1.15 | Authorize remote execution of privileged commands and remote access to security-relevant information. | Yes, supports compliance | FileFlex authorizes users to be able to execute privileged commands and facilitates remote access to data of any type or classification. |
| 3.1.16 | Authorize wireless access prior to allowing such connections. | N/A | Not applicable. |
| 3.1.17 | Protect wireless access using authentication and encryption. | N/A | Not applicable. |
| 3.1.18 | Control connection of mobile devices. | Yes, supports compliance | FileFlex supports mobile device connection control through permit-by-exception device whitelisting, controlling and validating all device connections. Any device not explicitly allowed in the device control access list will not be granted access. |
| 3.1.19 | Encrypt CUI on mobile devices and mobile computing platforms. | Yes, supports compliance | FileFlex encrypts all CUI during transit and enables access to CUI without having to move it from its source location. If data is downloaded using FileFlex Enterprise to a user's device, that data will remain with whatever encryption it pre-existed with. Otherwise out of scope. |
| 3.1.20 | Verify and control/limit connections to and use of external systems. | Yes, supports compliance | Every connection, whether internal or external, is always verified through the user's account login prior to access of the FileFlex system. Connections are account-based and limited to one connection per account at any given moment in time. |
| 3.1.21 | Limit the use of portable storage devices on external systems. | Out of scope | Portable storage devices are not accessible through a FileFlex Enterprise system. |
| 3.1.22 | Control CUI posted or processed on publicly accessible systems. | Partial compliance | FileFlex provides ultra-secure, zero trust access to any data it is set up to interact with, from within the FileFlex system itself. It can be set to never allow download or upload of data if required. Otherwise out of scope. |
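Requirements 3.1.5 and 3.1.7 can be pictured with a generic sketch (this is not FileFlex code; the role names, the guarded function, and the logging sink are purely illustrative):

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def privileged(func):
    """Deny non-privileged users and record every execution of a privileged function."""
    @functools.wraps(func)
    def wrapper(user, *args, **kwargs):
        if user.get("role") != "admin":
            audit_log.info("DENIED %s by %s", func.__name__, user["name"])
            raise PermissionError(f"{user['name']} may not run {func.__name__}")
        audit_log.info("EXECUTED %s by %s", func.__name__, user["name"])
        return func(user, *args, **kwargs)
    return wrapper

@privileged
def change_sharing_policy(user, repository, policy):
    return f"{repository} sharing set to {policy}"

admin = {"name": "alice", "role": "admin"}
viewer = {"name": "bob", "role": "user"}
print(change_sharing_policy(admin, "finance-repo", "read-only"))
# change_sharing_policy(viewer, "finance-repo", "read-only")  # raises PermissionError and is audited
```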
https://fileflex.com/blog/nist-sp-800-171v2-access-control-practices-for-remote-data-access/
Step 1: Create 'depta_user' and 'deptb_user' as 'User' role within Dremio. These users will only be able to query the datasets to which they should have permissions.

Step 2: Create a service account (in this case 'tpcds_service') as the generic access for the specific data source or dataset.

Step 3: Specify that the 'tpcds_service' user has access to a specific data source (or dataset). In this case we are permitting only queries on the 'tpcds-Hive3.default' datasource directory.

Step 4: Set up the inbound impersonation policies and confirm that the exec.impersonation.inbound_policies setting has been updated. The cluster is now enabled to support inbound impersonation for ODBC-related queries using depta_user and deptb_user.

The following sections give examples of how to use inbound impersonation with both Python and BI tools such as Tableau (other ODBC-related tools will follow a similar pattern).

Python ODBC Query example

Leveraging the above configuration steps, we will go through an example where we run a Python program containing the Python requesting user and its matching delegated user.

| ODBC Property | Value | Comments |
|---|---|---|
| UID | depta_user | This user does not require permission to the targeted dataset. But it MUST already have been mapped to the specified DelegationUID before running the ODBC-related program, or you will receive the message "pyodbc.InterfaceError: ('28000', u"[Dremio][Connector] (40) User authentication failed. Server message: User authentication failed". |
| DelegationUID | tpcds_service | The user specified as DelegationUID must have sharing rights associated with the requested Dremio table, or you will see the "User authentication failed" message. This DelegationUID also needs to have been mapped to the UID ODBC property. |

Step 1: Create your Python ODBC program as shown below.

Step 2: Run the Python program. As we see, the query completed successfully.

Step 3: Validate that the query was handled by the 'tpcds_service' user and not the 'depta_user' user by looking at the Dremio Job Logs screen as shown below. As we see above, the query type is ODBCClient, which is how we submitted the sample Python query, and in fact we see the tpcds_service account and not depta_user.

Step 4: Validate that an unauthorized user cannot leverage the DelegationUID. In this continuation of the example, we perform the same query using 'deptb_user'. In the above example we received the error message "Proxy user 'deptb_user' is not authorized to impersonate target user 'tpcds_service'.", ensuring that the Python program is not able to inappropriately hijack a delegation user.
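The original post shows the Python program and its output as screenshots. The following is a minimal sketch of what such a program could look like (the driver name, host, port, password, and query are placeholders and assumptions; the UID and DelegationUID properties follow the table above):

```python
import pyodbc

# Connection string for an ODBC query submitted as 'depta_user' but executed
# under the delegated 'tpcds_service' account. Driver name, host, port, and
# password are placeholders for your own environment.
conn_str = (
    "DRIVER={Dremio Connector};"
    "HOST=dremio.example.com;"
    "PORT=31010;"
    "UID=depta_user;"
    "PWD=REPLACE_ME;"
    "DelegationUID=tpcds_service;"
)

conn = pyodbc.connect(conn_str, autocommit=True)
cursor = conn.cursor()

# Illustrative query against the tpcds-Hive3.default source referenced in the steps above;
# substitute a table that actually exists in your environment.
cursor.execute('SELECT COUNT(*) FROM "tpcds-Hive3".default.store_sales')
print(cursor.fetchone()[0])

cursor.close()
conn.close()
```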
https://www.dremio.com/tutorials/how-to-inbound-impersonation/
Home-based nursing care services have increased over the past decade. However, accountability and privacy issues as well as security concerns become more challenging during care provider visits. Because of the heterogeneous combination of mobile and stationary assistive medical care devices, conventional systems lack architectural consistency, which leads to inherent time delays and inaccuracies in sharing information. The goal of our study is to develop an architecture that meets the competing goals of accountability and privacy and enhances security in distributed home-based care systems.

Methods

We realized this by using a context-aware approach to manage access to remote data. Our architecture uses a public certification service for individuals, the Japanese Public Key Infrastructure, and Health Informatics-PKI to identify and validate the attributes of medical personnel. Both PKI mechanisms are provided by using separate smart cards issued by the government.

Results

Context-awareness enables users to have appropriate data access in home-based nursing environments. Our architecture ensures that healthcare providers perform the needed home care services by accessing patient data online and recording transactions.

Conclusions

The proposed method aims to enhance healthcare data access and secure information delivery to preserve users' privacy. We implemented a prototype system and confirmed its feasibility by experimental evaluation. Our research can contribute to reducing patient neglect and wrongful treatment, and thus reduce health insurance costs by ensuring correct insurance claims. Our study can provide a baseline towards building distinctive intelligent treatment options for clinicians and serve as a model for home-based nursing care.

I. Introduction

The Japanese government is aiming to reduce the burden of medical health insurance costs as the number of elderly patients with chronic diseases, such as cardiovascular problems, diabetes, and dementia, has been increasing rapidly. These chronic diseases should be managed by promoting efficiency in home-based care. This can be an alternative to increasing hospital-based medical care for senior citizens living alone. The Japanese Ministry of Health, Labour, and Welfare issued guidelines for the secure management of medical information network systems. It includes mobile access but does not mention home-based nursing care environments. As such, it remains an ambiguous topic for home-based nursing care. It is becoming common for home care nurses to use smart devices and IoT-connected equipment to monitor patient vital signs. This interconnection results in a myriad of types of sensitive patient information, which are prone to data breaches. It is a challenge to balance the competing goals of efficiency against security through trustworthy transactions and accessibility.

In this paper, we present a healthcare information sharing system for home-based nursing care environments. Our system uses the Japanese Public Key Infrastructure (JPKI) embedded in the Japanese national individual card, the so-called 'My Number Card'. The Japanese government announced that health insurance validation will also use the JPKI platform starting in 2020. As of July 2018, 14 million cards had already been issued with the JPKI platform. Primary home-based care provides a level of independence at home and improves elderly quality of life. The use of smart devices and applications with body sensors has increased.
Cloud computing technologies and context-aware systems have been used for ambient assisted living and remote healthcare. Context-aware information monitoring is a key to home-based nursing care systems because it covers the situational context of the accumulated data and provides real-time personalized healthcare services suited to user needs. Context awareness is widely used in modern big data analytics. Sensors generate large amounts of data; however, they lack the processing power to perform essential monitoring and secure data transmission.

The JPKI certificate in the 'My Number Card' is used to verify the card owner's identity on the internet. The government is also considering issuing a second JPKI certificate for a user's mobile device, which links a phone to its owner. Our system also uses the Healthcare PKI (HPKI) to identify and validate the status of medical personnel through the hcRole included in the HPKI certificate. The hcRole attribute represents one of the twenty-four types of healthcare and welfare qualifications or one of the five types of administrative functions defined so far. The HPKI is already a global standard, ISO 17090, which defines the standard for Health Informatics-PKI.

From interviews with nurses, we gained insight into the specific challenges to security and privacy in home-based nursing care environments. We briefly discuss each of these concerns below.

(1) Leakage of private information: Privacy leakage is a critical issue in home-based nursing environments. This may hinder the processing and exchange of health data for diagnosis and treatment.

(2) Unintentional errors and malicious attacks: Based on interviews, we realized that physicians might often forget to log out of terminals or smart equipment in a patient's home, which leads to the risk of unauthorized parties gaining access to sensitive health information.

(3) Health data access control: IoT devices, such as wearables and sensors, using cloud services offer numerous advantages, including significant storage and computation capabilities. However, managing access control on these devices is challenging. Moreover, outsourcing third-party services would also introduce security concerns.

The objective of this study is to develop an architecture that achieves the competing goals of providing accountability and privacy and enhancing security in distributed home-based care systems. To accomplish our purpose, we designed an architecture that improves the reliability of data exchange between healthcare personnel. To generate a trustworthy source of visit records, we use a system that supplies concrete evidence that healthcare personnel visited a patient's residence. Our system provides a security layer that supports accountability by leveraging context-aware services so that it can react intelligently according to the physical and logical environment.

II. Methods

Doctors and medical care providers (often nurses or licensed caregivers) take risks rooted in data transmission and access to sensitive medical data across institutions in a distributed home-based healthcare environment (Figure 1). To address this situation, we propose a home-based information-sharing architecture (Figure 2) in which context awareness is introduced into the system.

1. Privacy-Enhancing Trust Levels

User authentication is the first step in protecting sensitive medical data. We employ the state-run JPKI to serve as a trusted mechanism with high reliability in our system.
The second step is user authorization, which employs user access roles based on the HPKI. Thus, we leverage the robust security features of both the JPKI and the HPKI. The third step entails data resource processing, which is managed by a context-aware system built on a gateway to enhance privacy protection and data security, as depicted in Figure 3. The gateway securely connects a patient's home to a remote service provider through a virtual private network (VPN). To carry out these steps correctly, all users hold JPKI certificates, and medical personnel additionally hold HPKI certificates. The mobile devices used in this system have to be linked to registered medical staff members. This is done by installing a PKI certificate in the trusted execution environment of the devices. Registration allows the system to associate a device with its owner's identity. It also ensures that the gateway node is linked to a secure near-field communication (NFC) tag associated with the JPKI holder's account. The government is considering issuing JPKI certificates for mobile device owners soon.
2. Context-Aware Management
When the healthcare information service is accessed from home-based environments, user access is mediated by a context-aware system that constrains access to services. The system uses context resources and data to provide relevant information and services to the users (medical personnel). The context-aware middleware comprises two modules: the client context manager (client CM) and the gateway context manager (gateway CM). The client CM is wrapped in the Android application. It acquires and processes data from sensor networks in the user's domain and transmits it to the gateway CM. The gateway CM processes context information based on pre-defined context policies for each user, as described later in this section. It also handles alert management and context instances for each session. The process is summarized here and is presented in Figure 4, where the numbering corresponds to the flow.
· Resource access request (1): A user (U) sends an access request based on the context policy to authenticate to the service provider. If the user can meet the context policy requirements, then he or she can obtain a session from the session manager.
· Attribute and credential authentication (2)–(3): Otherwise, user attributes need to be authenticated and validated with the service provider's access control, as described later in the general authentication flow (see Section II-3). The system requests user credential verification based on the privacy-enhancing trust levels, in this case using the PKI card. Policy-enforcement module definitions are based on the hcRole coded into the HPKI cardholder's policy settings. The remote authentication service issues authenticated attribute credentials based on the hcRole and on authorization from the patient's credentials.
· Assigning client sessions (4): The access manager (AM) maintains each session by assigning a temporary client session token. It assigns tokens to the client CM for each user session, with session-based access rights, until the session is ended by the user or by the policy-enforcement module.
· Context processing (5)–(6): The gateway CM enforces policies based on information supplied by the context provider module, which uses metrics to establish the device's context types. Context collection is determined by the access policy. Context types may vary depending on their source among a verified user's device context providers. The collected context is time-stamped and digitally signed to identify the context source.
· Context validation (7): The gateway node confirms and validates the pre-defined context and the runtime contextual information gathered from the user's smartphone. The context verifier (CV) accumulates all the contextual data to confirm whether the context satisfies the conditions of the policy. After validation, the CV sends an encoded time-stamp and challenge response to the context provider (CP). Thus, engineers examining the system's security requirements can decide appropriately which context types will be used to define access.
· Policy validation (8): The policy verifier processes the predefined context policies provided by the policy decision point (PDP). The context data filter acts as a policy enforcement point (PEP). It evaluates context data from the CP against the compiled policy rules, and the calls to request services are executed as user actions. Actions may include downloading files, updating healthcare information, and accessing limited data resources from other care providers.
Context-aware policies are applied at the user endpoints by the client CM. These policies are maintained for every active user session to grant or reject a user's access to specific healthcare information resources. Further steps involve ordinary authorization, which is described in Section II-3 in relation to trust-level credential authentication and authorization. The corresponding context policy is based on Figure 5. As an example, consider a policy that ensures that nursing care staff receive notifications on a mobile app to validate the current session associated with a particular patient. Whenever a change in location occurs based on the predefined context type (Table 1), the policy ends each information session by logging the user off. The status of an information session changes with every variation in the context metrics. For example, the policy protects against a caregiver leaving a residence and forgetting to sign out of active sessions. Thus, exposure of vital medical information and inconsistent logging of access control data can be avoided.
3. Trust-Level Credential Authentication and Authorization
To initiate any transaction through the gateway, users must also establish a secure connection between the mobile device and the gateway. Figure 6 presents the authentication and authorization process between mobile devices in our system using context management. This process validates the credential information provided by the user (U) against the repository of enrolled users. The decision is based on authorization information, such as the user's identity, the services being requested, and the attributes of the requester. If all checks are passed, an authentication token is issued, and the trust-level credential used (HPKI/JPKI) is digitally signed. Upon establishing a connection via the gateway, the CP in the client application (Section II-2) requests that context information be synchronized through the trusted gateway node. Once verification is securely completed, the healthcare provider has permission to access medical data and records. However, the breadth of their access is determined by the attributes of their assigned hcRole and by the authorization based on the patient's JPKI. The patient presents his or her smart card and enters the smart card's personal identification number (PIN) code to sign the authentication challenge.
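To make the location-based session policy described in Section II-2 more concrete, the following minimal Python sketch shows how a gateway context manager might end a session when a caregiver's device context no longer matches the patient's gateway. It is an illustration only, not the authors' implementation: the class names, the distance threshold, and the choice of GPS coordinates plus the Wi-Fi SSID as context metrics are assumptions.

    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class Context:
        latitude: float
        longitude: float
        wifi_ssid: str

    @dataclass
    class Session:
        user: str
        active: bool = True

    MAX_DISTANCE_DEG = 0.001  # assumed threshold (roughly 100 m); the paper does not give a value

    def within_policy(device: Context, gateway: Context) -> bool:
        # Context metrics assumed for this sketch: geo-location and Wi-Fi access point.
        near = hypot(device.latitude - gateway.latitude,
                     device.longitude - gateway.longitude) <= MAX_DISTANCE_DEG
        return near and device.wifi_ssid == gateway.wifi_ssid

    def on_context_update(session: Session, device: Context, gateway: Context) -> Session:
        # Run on every context update: a caregiver who leaves the residence while a
        # session is still active is logged off automatically, as the policy requires.
        if session.active and not within_policy(device, gateway):
            session.active = False
        return session

    home = Context(35.6812, 139.7671, "home-gateway")
    phone = Context(35.6950, 139.7800, "clinic-wifi")  # the caregiver has left the home
    print(on_context_update(Session(user="nurse-01"), phone, home).active)  # False

In the actual architecture, this decision would be taken by the policy enforcement point against the compiled policy rules supplied by the PDP rather than by a hard-coded threshold.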
If deemed necessary in special cases, the secure internal PIN-less authentication mechanism, which was introduced for JPKI cards, allows cardholders (here, patients or next of kin) to execute the authentication process that is employed in secure IC card transactions. By executing the authentication process, the patient allows medical personnel to access the relevant medical information and verifies the status of his or her health insurance.
III. Results
1. System Prototype
A prototype of the system was deployed for testing on a local network with an Android smartphone as the medical personnel terminal and a Raspberry Pi as the gateway node. Figure 7 depicts a screenshot of a ‘request for action’ notification. The trust-authentication services were emulated by Linux machines functioning separately as the certificate authorities (CAs) of the HPKI and the JPKI, as well as the health insurance agency. Moreover, data center repositories were established on the same machines. Context-aware middleware modules were implemented in both the client devices and the gateway node. In the prototype, client applications were developed for the Android platform, which gave physicians, nurses, and medical caregivers remote access to the healthcare information system. The patient platform was implemented with the gateway using a Raspberry Pi, which was connected to the Internet and to the patient's TV via a set-top box. This smart-TV-like solution is intended to be usable by elderly people who may not be adept with technology. It also supports the concept of a personal electronic health journal (health diary) that enables patients to communicate and interact with doctors, nurses, and caregivers.
2. Prototype Analysis
We evaluated the prototype system with the following aims: (1) to investigate whether the system satisfies the access control and privacy-enhancement policy, which permits access by authorized users and restricts unauthorized users; (2) to check whether the actual surrounding environment context, such as the geo-location reported through the mobile devices, is trustworthy and whether the context-aware decision is then made accurately in collaboration with the PEP; (3) to examine how the context-aware manager processes the PEP in the case of policy changes made by the policy administration point through the PDP; and (4) to simulate the use of the two PKI cards (HPKI and JPKI) in our experimental setting to predict real situations and improve usability.
Tests were conducted in a closed environment with six users, including two patients, a registered nurse who was also a licensed home caregiver, and general care staff. The results will not wholly reflect those of an actual working environment. The system functionality may require specific modification to suit organizational privacy policies and the legal requirements set by the governing authority. Processing times for provider authentication and patient authorization need to be shorter. In our system, the average time for the client-gateway association is nearly 6.42 seconds, including the users' (patient and care provider) PIN inputs and the initial NFC tag handover to the Wi-Fi connection. In actual situations, this may take longer, though not intolerably so, because user PIN entry takes time and the authentication and authorization are performed by remote services. The latency of the NFC tag read-write is about 5.34 seconds during the initial connection of the mobile device to the nearest gateway. Using the NFC tag enhances the workflow of users and integrates it into a seamless access control process.
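For readers who prefer code to prose, the sketch below condenses the authorization step of Section II-3 that the prototype exercises: the breadth of access follows the HPKI hcRole, and patient consent is expressed through a JPKI-signed challenge. The names, the role-to-permission mapping, and the token format are assumptions made for illustration; they are not the prototype's implementation.

    import secrets
    from dataclasses import dataclass
    from typing import Optional

    ROLE_PERMISSIONS = {  # assumed mapping, for illustration only
        "physician": {"read_record", "write_record", "prescribe"},
        "nurse": {"read_record", "write_visit_note"},
        "caregiver": {"write_visit_note"},
    }

    @dataclass
    class AccessToken:
        hc_role: str
        permissions: frozenset
        session_id: str

    def authorize(hc_role: str, hpki_valid: bool, patient_consent_signed: bool) -> Optional[AccessToken]:
        # Both checks must pass: a valid HPKI credential for the care provider and a
        # JPKI-signed consent challenge from the patient; only then is a token issued.
        if not (hpki_valid and patient_consent_signed):
            return None
        return AccessToken(hc_role=hc_role,
                           permissions=frozenset(ROLE_PERMISSIONS.get(hc_role, set())),
                           session_id=secrets.token_hex(16))

    token = authorize("nurse", hpki_valid=True, patient_consent_signed=True)
    print(token.permissions if token else "access denied")

A production system would, of course, bind such a token to the context-managed session described in Section II-2 rather than returning it directly.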
The NFC handover also improves user interaction by eliminating manual input tasks.
IV. Discussion
We developed a context-aware architecture for supporting nursing care providers in home-based nursing environments. We use a secure NFC authentication mechanism that implements a secure channel by encrypting sensitive context data during transmission over the network. This architecture conducts authentication and authorization for access to a specified patient's data using a context-aware gateway node. Thus, when a nurse or doctor visits the wrong patient's home, it rejects authorization based on geo-location context sources, such as GPS data and the Wi-Fi access point. Even where contextual information for the client is spoofed or misleading, the context verifier periodically checks the contextual sources and performs validation by comparing context data from the gateway and from the mobile devices of the medical caregiver. If the client's location varies significantly from that of the gateway in use, it requests that the access manager end the session by automatically logging the user off.
By utilizing the JPKI and HPKI mechanisms, the identities of patients and medical personnel can be confirmed, satisfying the government requirement regarding security and privacy protection. Patients' use of their JPKI cards expresses explicit permission to access their data. This also serves as the granting of informed consent before medical procedures. Patient privacy is guaranteed by allowing access to information only by medical personnel with authorized roles according to the HPKI hcRole.
Lee et al. proposed a service-oriented framework for remote medical services focused on the IoT environment using a similar method of context awareness. However, our research systematically approaches the introduction of context awareness in home-based nursing care environments. Some preliminary work was carried out in which the HPKI was introduced for access control of web-based clinical information in combination with the policy control of personal computers. In that work, users are authenticated using device IDs and passwords. Although this method is practical, our approach is stronger and is expected to resist common attacks, such as user impersonation, password guessing, and stolen-verifier attacks.
In our research, we introduced context awareness in home-based nursing environments. The proposed system generates electronic evidence of a medical visit to a patient's home through authentication and authorization using the JPKI and the HPKI, respectively. The authentication process employs digital signatures for all patients and healthcare professionals, which is essential during insurance claims verification. Our approach ensures on-the-spot authorization through the patient's consent to access their data, while maintaining the patient's privacy. In our system design, we focused on user-friendly validation of the context without leaking sensitive user information. Our proposed approach complies with the Japanese Act on the Protection of Personal Information and the Health Insurance Portability and Accountability Act (HIPAA). This makes the system extensible to other global regulatory requirements for home-based nursing care systems. Our experimental results show that the authentication and authorization architecture is feasible to deploy in IoT network environments. However, the main limitation of our work is that the architecture is based on the premise that patients can use the JPKI and medical personnel the HPKI.
The JPKI could be replaced with other PKI-based ID cards. The HPKI is already standardized by the ISO, but it is only available in Japan so far. To make our system user-friendly through BYOD, we apply a method that filters malformed context information from users' devices. The home gateway serves as a trusted source for gathering information to monitor a patient's vital signs through wearable devices. Our study can provide a baseline for building distinctive, intelligent treatment options for clinicians and serve as a model for home-based nursing care. We believe that our research will contribute to reducing patient neglect and wrongful treatment. It can also reduce health insurance costs by ensuring correct insurance claims. We are confident that the proposed system will enhance patient and medical care provider privacy by enforcing access control.
Notes
Conflict of Interest: No potential conflict of interest relevant to this article was reported.
https://e-hir.org/journal/view.php?number=995&viewtype=pubreader
Dill works best as a background note in many popular dishes. When used judiciously, it can complement the flavors in many meat and vegetable dishes. Too much dill has the opposite effect and can make dishes bitter and downright unpalatable. As with most flavorful ingredients, there are fixes if you use too much of it. Below are several ways that you can rescue a dish if you have added more dill than you really needed.
Dilution is one of the simplest and most intuitive ways to tone down a flavor that is too strong in a dish. If you have used too much dill in a stew or sauce, simply add more of the ingredients minus the dill. By increasing the proportions of the ingredients that are not dill, you restore a balance between all the flavors. Note that the extent to which this works depends on how far overboard you went with the dill and whether you have enough of the other ingredients left over to double or even triple the recipe. In some situations, dilution may not be the most viable option. If that is the case, consider one of the other solutions below.
One of dill's drawbacks may actually be a benefit if you have used too much of it. That drawback is the fact that it does not have a long-lasting flavor. As a relatively delicate herb, its flavor will die down quickly rather than intensify the way that some other herbs might. If the dish can withstand a longer cooking time, simply cook it for a little longer. The extra cooking may not even be necessary. Some people have found that by letting the dish sit in the refrigerator overnight, the overly strong dill flavor dissipated and made the dish palatable again.
Potatoes are good at absorbing flavors and may work in many of the dishes that typically require dill. Potato is recommended for dealing with too-pungent flavors from many herbs and spices, including cayenne pepper and thyme. Consider one of the more traditional uses of dill: the eastern European soup known as borscht. Potatoes show up in some iterations of borscht, which means that you can neutralize the extra dill flavor without sacrificing the authenticity of the dish. If you would rather not serve potato in the dish, simply remove it before serving.
Another option is to remove some of the dill itself. While this may not be possible with finely chopped dill, it can be effective if your dish requires you to use whole stems of the herb. Whole stems are actually the preferred way to use dill in many dill pickle recipes. In this case, you will be able to limit the extent to which it flavors your dish by pulling the stems out before they release all of their flavor. With dill pickles, your best bet is simply to make a new batch of brine without as much dill. Soak the dill pickles in clean water before placing them in the new brine.
Like potatoes, acidic ingredients are great for counteracting herbal flavors that are too concentrated. The acid in vinegar helps to mask the herbaceous, celery-like notes of dill. In a potato salad or similar application, a little extra vinegar will help to mute the dill flavor while improving the overall flavor profile of the dish.
https://www.spiceography.com/too-much-dill/
Is Cuisine Still Italian Even if the Chef Isn't? asks the New York Times. Is pasta still Italian, even if it came from China? I have to say, I find this discussion nearly baffling. It's not surprising, of course, but it baffles me nonetheless. I suppose you'd have to embrace a Like-Water-For-Chocolate type belief that some of the creator's soul ends up in the dish to believe that the ethnic or racial or cultural background of someone preparing a dish fundamentally changes the dish's nature - or has any appreciable effect at all. Yes, a chef could incorporate flavors from his/her experience and upbringing - but that's different than asking whether a non-Italian making a strictly Italian dish with Italian ingredients and flavors can somehow not do it because he's not Italian. All in all, the linked article is really just more NYT fluff that sounds more insightful (inciteful?) than it really is, since the answer seems to be, uh, yeah, lots of people in kitchens are recent immigrants, to whatever country, from whatever country. In my mind, though, because I'm a nerd like this, it gets back to the question of who can represent whom. Or what. Our notions of authenticity come out in the strangest ways.
http://www.phoblographer.com/2008/04/its-not-racism-but-culture.html
America. Chinese cooking methods also excel over cooking methods in Africa, Europe, and America. If we look at the progress of Chinese civilization in the modern era, we realize that all aspects tend to advance and develop. The spread of a cuisine is a measure of intellectual, cultural, and civilizational progress alike. The development of gastronomy as an important component of the food culture, and the great progress that food culture witnessed, was indicative of the economic and cultural development achieved by the Chinese nation. This is a clear result of the progress of Chinese society.
Principles that made Chinese food spread around the world
- The Chinese eat a lot of fruits and vegetables, nearly twice the amount that the West eats.
- Many exotic vegetables are also used. You may not have heard of many Chinese vegetables and fruits, such as pomelo, bitter melon, huge potatoes, tree fungi, and many other plants.
- The Chinese do not eat with traditional cutlery; they eat food with chopsticks. Since it is difficult to cut food using chopsticks, all foods served should be soft or cut into small pieces before cooking. This followed Confucius' teachings calling for peace and love, and it was therefore forbidden to use a fork and knife while eating.
Some of the popular dishes are multi-ingredient
If one can gauge the advancement of a people by looking at their actions, then as soon as you see China's popular dishes you believe that these creations are the result of the advancement of their ideas. Here are some examples of their traditional dishes, full of various ingredients:
- Ma Po Tofu: Mapo tofu is a popular Chinese dish from Sichuan Province, where spicy food predominates and the region's distinctive spice, Sichuan pepper, gives the dishes a unique effect. Sichuan pepper not only adds aroma and flavor but also numbs your tongue so that it can absorb more heat. The use of such a large amount of hot pepper appears to have arisen only after its benefits became known. Perhaps the most important claimed benefit is that it helps expand blood vessels so that blood flows properly through the body, reducing the risk of blood clots and many other heart problems. Tofu, a vegan cheese-like curd made from rich soy milk, together with ground red-brown meat and chopped green onions, gives the dish an unparalleled flavor.
- Kung Pao Chicken: Kung pao chicken is a Chinese classic with spicy chicken, peanuts, and vegetables in a delicious kung pao sauce. This easy, homemade recipe is healthy, low in calories, and so much better than eating out. The dish has also made its way outside of China and is still a common sight on Chinese fast food menus in countries around the world. There are good reasons why everyone loves kung pao chicken. It has many flavors: refreshing, sweet, and salty with a hint of heat. The art lies in putting the right amount of each ingredient to come up with the winning flavor combination.
- Peking Roasted Duck: Peking duck is a dish from Peking that has been prepared since the imperial period. The meat is characterized by its thin, crisp skin, with original versions of the dish mostly served with the skin and a touch of meat, sliced in front of diners. Ducks specially bred for the plate are slaughtered after 65 days and seasoned before being roasted in a closed or hung oven.
The Chinese usually eat the meat with green onions, cucumber, and sweet bean sauce, with pancakes wrapped around the fillings. Sometimes pickled radish is also included, and other sauces (such as hoisin sauce) are often used.
Chinese food therapy
Chinese food therapy is a diet rooted in Chinese concepts of the effect that food has on the human organs. It focuses on ideas such as eating in moderation. As we have seen, Chinese food is known for including vegetables and protein: a typical meal has two main components, vegetables and meat or fish, in addition to rice or noodles. Chinese cuisine is mainly based on the use of vegetables in all types of food. Vegetables are not cooked until fully done but are left slightly undercooked to preserve their nutritional value and their unique fresh taste.
In fact, it is not taste alone that explains the variety of ingredients; rather, it is the benefits attributed to each taste that call for a diversity of ingredients and flavors to increase those benefits. The Chinese divide taste into five different types: sour, bitter, sweet, spicy, and salty. But for the Chinese, these are more than just flavors. In traditional Chinese medicine, each taste is held to act on the relevant organs:
- Sour food enters the liver to help stop sweating and relieve coughing.
- Salty food enters the kidneys, where it can dry, cleanse, and soften tissues.
- Bitter food enters the heart and small intestine and helps reduce heat and dry out any moisture.
- Spicy food enters the lungs and large intestine and helps stimulate the appetite.
- In turn, sweet-tasting food enters the stomach and spleen and helps lubricate the body.
This is why it is so important that any diet include all kinds of flavors or tastes. Does this mean that in order to be healthy you should eat food that is neutral and includes every kind of flavor? The answer is no. Food choices should be matched to the constitution of your body, as well as to the season, the location in which you are, and your age and gender. In other words, traditional Chinese medicine practitioners adapt their recommendations to a variety of circumstances.
https://foodiesuite.com/chinese-food/
Mulligatawny is a delicious and aromatic soup with a rich history and cultural significance in Indian cuisine. The name "Mulligatawny" is derived from the Tamil words "milagu," meaning "pepper," and "tannir," meaning "water," which is fitting, as the soup is known for its spicy, pepper-based broth.
Mulligatawny Soup Recipe
The recipe for Mulligatawny soup is versatile and can be easily customized to suit different dietary needs. It can be prepared vegan and gluten-free by using coconut milk instead of cream and gluten-free flour or cornstarch to thicken the soup. The recipe can also be adjusted to suit different levels of spiciness by varying the amount of pepper used.
The recipe for Mulligatawny soup typically starts with sautéing onions, ginger, and garlic in oil or ghee. Then, various spices such as cumin, coriander, and turmeric are added to the mixture. Next come vegetables such as carrots, celery, and tomatoes, along with lentils or a protein such as chicken or lamb. The soup is then simmered until the vegetables and lentils are tender and the flavors have melded together. Finally, the soup is thickened with flour or cornstarch and finished with cream or coconut milk for added richness and creaminess.
Mulligatawny soup is a delicious and comforting dish, perfect for a cold winter evening. It is also a great way to enjoy the flavors of Indian cuisine without feeling too heavy. The soup pairs well with different types of bread or rice and can be garnished with fresh herbs, toasted nuts, or yogurt for added flavor and texture.
History Of Mulligatawny Soup
The history of Mulligatawny soup can be traced back to the British colonial period in India. British colonial officers discovered the delicious pepper broth and brought it back to England, where it quickly became a popular dish. The original recipe was a simple broth made with pepper and lentils, which was then adapted to the British palate. Over time, it evolved to include more ingredients and flavors from Mughlai cuisine, a cooking style that developed in the medieval Mughal Empire in India. Today, Mulligatawny soup is known for its rich and complex flavors, and it is a staple in both Indian and British cuisine.
Health Benefits Of Mulligatawny Soup
Mulligatawny soup is a nutritious and healthy dish with various health benefits. Some of the main nutritional benefits of Mulligatawny soup include:
High in protein: Mulligatawny soup is often prepared with lentils, chicken, or lamb, which are all good sources of protein. Protein is essential for maintaining and repairing muscle tissue and helps keep you full and satisfied.
Rich in vitamins and minerals: Mulligatawny soup is typically made with various vegetables such as carrots, onions, celery, and tomatoes, which are rich in vitamins and minerals like vitamin A, vitamin C, potassium, and fiber.
Low in fat: Mulligatawny soup is often prepared with broth or coconut milk, both of which are low in fat. This makes it a great choice for people trying to watch their fat intake.
Good for digestion: Mulligatawny soup is made with a variety of spices, such as cumin, coriander, and turmeric, which can aid digestion and reduce inflammation in the gut.
Gluten-free and vegan options: Mulligatawny soup can easily be made vegan and gluten-free by using coconut milk instead of cream and gluten-free flour or cornstarch to thicken the soup.
Good source of antioxidants: Its ingredients, such as turmeric, ginger, and garlic, are rich in antioxidants, which help fight free radicals and prevent chronic diseases.
It’s important to note that the nutritional benefits of this soup can vary depending on the ingredients and preparation. To maximize the nutritional benefits, it’s best to use fresh, whole ingredients and to avoid adding too much fat or sodium.
The Final Words
In short, Mulligatawny Mughlai is a must-try soup for anyone who loves Indian cuisine. Its rich history and diverse flavors make it a perfect blend of spices, protein, and creaminess. It’s a perfect comfort food that you can enjoy anytime. Visit Urban Village Grill and enjoy Mulligatawny soup with your friends and family.
https://www.urbanvillagegrill.com/seasonal-menu/mulligatawny-mughlai-the-delicious-soup-you-shouldnt-miss/
The list below contains the books in my cooking and baking library. It’s an extensive resource which allows me to compare recipes and methods from some of the world’s most renowned chefs and culinary institutions. What I have found interesting, or concluded, is that once a few basics are learned (making a roux, the basic processes of sautéing, broiling or roasting, etc.), the only value of a recipe is its flavor. Claims of “authenticity” are meaningless. Whether a dish of “original” Bolognese sauce contains red or white wine, milk or cream, or a Hungarian Goulash is made with beef, pork or lamb, with or without tomatoes or potatoes, is irrelevant. The bottom line is all about flavor. Recipes change through time and by region. For example, Wiener Schnitzel is commonly thought of as a German creation. Actually, it is an Austrian dish made with veal, but it has its roots in Italy. Austrians will be the first to admit that Wiener Schnitzel doesn’t come from Vienna. Interestingly, Schnitzel (the technique of breading and frying thin cutlets of meat) appears to have originated with, and is attributed to, the Romans around 1 BC. Traditional German Schnitzel is prepared the same way as Austrian Schnitzel but is made with pork instead of veal, and many recipe variations are offered. This is only one example of hundreds of primary dishes within countries and regions. Learn some basic methods and procedures for meals you enjoy. Add or subtract ingredients which appeal to you. The only thing that counts is enjoying the flavors you concoct!
- 400 Soups by Anne Sheasby
- Appetizers, Fingerfood, Buffets & Parties—Hermes House
- Baking by Martha Day
- Betty Crocker's Slow Cooker Cookbook
- Cheese Making — Ricki Carrol
- Coming Home to Sicily by Fabrizia Lanza
- Culinaria—Germany by Christine Metzger
- Essentials of Classic Italian Cooking by Marcella Hazan
- Flour, Water, Salt, Yeast — Ken Forkish
- French Women Don't Get Fat by Mireille Guiliano
- Gourmet Burgers — Publications International, Ltd.
- Herbs for the Home by Jekka McVicar
- Hot Links and Country Flavors — Bruce Aidells and Denis Kelly
- How to Cook Everything by Mark Bittman
- Italianissimo — McRae Books
- Main Courses-365 — Hermes House
- Mastering Fermentation — Mary Karlin
- Mastering Pasta: The Art & Practice of Handmade Pasta by Marc Vetri
- Meat & Poultry by Lucy Knox and Keith Richmond
- Pasta by Linda Fraser
- Soup by Debra Mayhew
- The All New Joy of Cooking by Irma S. Rombauer, Marion Rombauer Becker and Ethan Becker
- The Art of Quick Breads by Beth Hensperger
- The Best Ever 20 Minute Cookbook by Jenni Fleetwood
- The Bread Baker's Apprentice by Peter Reinhart
- The Ciao Bella Book of Gelato & Sorbetto by F.W. Pearce & Danilo Zecchin
- The Complete Book of Sauces by Sallie Y. Williams
- The Culinary Institute of America
- The Elements of Pizza — Ken Forkish
- The Essential Pasta Cookbook — Bay Books
- The Fine Art of Italian Cooking by Giuliano Bugialli
- The New Grilling Book — Better Homes & Gardens
- The Perfect Scoop by David Lebovitz
- The Pie and Pastry Bible by Rose Levy Beranbaum
- The Science of Good Cooking ― Cook's
- The Way to Cook by Julia Child
http://chefchristoph.com/index.php/2015-01-24-21-12-20/resources
All the flavor of Italy on the table
When we think of Italian cuisine, the first thing that comes to mind is a pasta dish or a pizza. However, beyond these typical preparations, Italians are enthusiastic about risottos, especially the inhabitants of the north of the country. The reason is that rice is grown in the areas of Veneto and Lombardy and throughout Piedmont, especially varieties rich in starch, which are essential when making a risotto so that the result is creamy rather than brothy. But beyond the rice, it is necessary to take the cheese into account, mainly Parmigiano Reggiano, although Pecorino or Grana Padano are also used. It is also important to note that the rice should not be cooked in water but in a good vegetable or poultry stock. Once the rice is ready, you have to use a fundamental technique, mantecatura, which consists of stirring in cheese and butter without stopping to give it the dense consistency of a good Italian-style risotto. Next to this dish, on our table there should be a crispy focaccia with a glass of cava and a glass of Acqua Panna. Buon profitto!
Ingredients for 2 people
For the risotto
- 180 g of rice
- 600 ml of vegetable broth
- 80 g of blue cheese
- 1 garlic clove
- 1 small onion
- 1 glass of dry white wine
- Olive oil
- Salt and pepper
Also
- 6 sheets of wonton pasta
- Fresh arugula
Step by step directions
- In a frying pan with a drizzle of olive oil, sauté the onion and the clove of garlic, previously finely chopped, until they are transparent.
- After about ten minutes, add the rice and stir for a couple of minutes. Add the wine and cook until it evaporates.
- Add the hot broth little by little to the rice and stir to avoid sticking. Add salt and pepper.
- Meanwhile, fry the wonton sheets until they are crisp. Drain them on absorbent paper.
- Once the rice is cooked, after approximately 15 minutes, remove it from the heat and add the blue cheese and about 25 g of arugula. Stir until everything is well integrated.
- In a soup dish, place a spoonful of rice, then a fried wonton sheet, and finish with more rice. Decorate with fresh arugula leaves.
‘Harmonies in Flavors and Fragrances’, by Juan Muñoz Ramos. Two flavors mark this dish, the blue cheese and the rice, cheerful, yes; then there is the arugula, which cleanses, and the wonton, which offers a very pleasant crunchy texture. We need some very specific liquid friendships: on the one hand, a water that hydrates and respects the dish as a whole and, on the other, a balanced acidity thanks to long aging and good base wines. We are talking about two great gastronomic friendships: a water like Acqua Panna (elegance and depth with a clean and direct flavor) and a Gran Cava like Cuvée DS Gran Reserva Brut. With these friendships at the table, life, I assure you, has a better flavor.
Tips
- For the broth, put a pot with water and different vegetables over low heat to extract all the flavor of the vegetables.
- If you don't have vegetable broth, you can use chicken broth.
https://www.thegourmetjournal.com/english-version/blue-cheese-risotto/
Culture Tuesday is a weekly column in which Best of Vegan Editor Samantha Onyemenam explores different cultures' cuisines across the globe through a plant-based and vegan lens. Before you start exploring vegan soul food with her today, you might want to click here to read her column about Kurdish cuisine, here to read her column about Levantine cuisine, and here to read her column about Uzbek cuisine.
Culture Tuesday – Soul Food
Soul food is a cuisine developed and enjoyed by African Americans, mainly in southern parts of the United States of America. It is a product of taking the worst of a situation and turning it into something great. More specifically, soul food was developed as a result of slavery. During the period when the enslavement of West Africans was still legal, enslaved people were typically given scraps or unwanted foods. These were foods or parts of foods that the slave masters, and their supporters, did not want or would otherwise discard. Amongst the more generally usable ingredients, the enslaved people, who often worked on plantations, also had access to a limited amount of cornmeal, sweet potatoes, and greens, such as collard greens, kale, and beet greens, as well as a few other ingredients.
Due to the intense physical workload and activities the enslaved people had to endure daily, high-calorie meals were important to sustain them. This led to cooking methods aimed at increasing the calorific value of dishes. Such methods include deep frying foods, mixing high-calorie ingredients with lower-calorie ones to bulk up the meal, breading foods, and using most or all of each ingredient, regardless of whether certain parts are considered less pleasant to cook or eat because of their appearance or texture prior to cooking. These include the stems and skins of some foods, which are often discarded in a number of cuisines.
Created as a result of slavery, soul food is a symbol of taking the worst of a situation and turning it into something great.
In more modern times, soul food (or distinct elements of the cuisine) is generally associated with the South and is considered to be the familiar or preferred cuisine of non-Black people in the region. However, its roots remain linked to the enslavement of Black people. This is because enslaved people with cooking skills were forced to cook meals for their slave masters and their companions or guests, and, later, after emancipation, African Americans were employed as cooks for prominent white figures, such as presidents, making soul food an even more popularly known and sought-after cuisine amongst non-Black people. (Please note that this does not mean that non-Black people are racist for consuming foods considered as soul food or seeing them, or elements of them, as part of their culture.)
Influences
Soul food is greatly influenced by the general cuisine of West Africa as well as by some influences from the Native Americans. As the original developers of soul food were abducted from West Africa, those who could cook arrived on Turtle Island (the United States of America) with skills and cooking styles associated with their cultures. These cooking styles were used in their processes of creating the dishes which make up soul food. As the food the people were making was based on ingredients accessible to them in their new location, the cuisine is also influenced by the general Native American cuisine. Methods of processing some of their ingredients were also adopted from the Native Americans.
These include the various ways Native Americans process corn as well as some of their cooking methods for it.
Soul food has more intense and complex flavors when compared to the more general Southern cuisine due to the Native American and West African influences on the former.
The Native American influences brought about dishes such as cornbread, grits, boiled beans, hush puppies, and hoecakes. The West African influences, on the other hand, brought about smoked dishes (also common in Native American cuisines) as well as spicy (hot) dishes, foods dense in green leafy vegetables, rice dishes, stewed bean dishes, okra dishes, and sweet potato dishes (as sweet potatoes are used similarly to how West Africans cook yams), along with dishes flavored with nuts, aromatic root vegetables, herbs, spices, and/or flavorsome fats and oils, giving them more intense and complex flavors when compared to the more general Southern cuisine.
Vegan Soul Food
Modern-day soul food is mostly not vegan, despite the common vegetarian and vegan cultural eating habits of some of the enslaved people prior to arriving in the United States. Due to having to use animal products and by-products to increase the calories in meals, as well as to form a meal from the very limited food they had access to, the cuisine evolved into a rather non-vegan one. However, Black people have created vegan soul food for health reasons as well as to stay in touch with their culture, be it religious culture or what they perceive to be the original culinary culture of the parts of Africa their family originated from.
Vegan soul food has been achieved through the use of meat substitutes such as tofu, tempeh, seitan, mushrooms, and even cauliflower. These are marinated or breaded similarly to the way those processes are carried out on meats. Vegans wanting to stay as true to the soul food cuisine as possible might go on to deep fry the food, while those who are trying to eat as healthily as possible while still enjoying the foods they grew up on might opt for other cooking methods such as air frying, shallow frying, or roasting.
Typical Soul Food Meals
Typical soul food meals consist of a range of sides, fried and/or roasted meat (or meat substitutes), cornbread, and sweet desserts. The most common sides are macaroni and cheese, collard greens or kale, candied sweet potatoes, and a black-eyed pea or bean dish. The desserts are often sweet potato pie, peach cobbler, banana pudding, or a pound cake. The least globally popular of these are cornbread and sweet potato pie.
Cornbread is bread made from cornmeal. Traditionally, southern (soul food) cornbread is savory. However, African Americans in the northern regions of the United States (whose families migrated north after escaping slavery or being emancipated) tend to make sweeter cornbreads. It can be baked, or fried and baked. When cooked using the latter method, the cornbread is made by pouring its batter into a cast-iron skillet containing hot oil and then returning the skillet to the oven for the bread to bake. This results in a dense, moist, somewhat crumbly cornbread with a crunchy crust at the bottom. Variations of cornbread can be made with the addition of jalapeño slices or smoky meat substitutes.
Sweet potato pie is an open pie (with a crustless top) made predominantly from pureed sweet potatoes. In the 18th century, sweet potato pie was considered to be a savory dish because sweet potatoes are a root vegetable. However, by the 19th century, the sweet dish was classified as a dessert as opposed to a savory meal.
Recipes – Vegan Mac and Cheese, Smoky Collard Greens and Sweet Potato Salad
In this video, Jenné Claiborne (@sweetpotatosoul on Instagram) shares three of her veganised soul food dishes, which are perfect as delicious home-cooked meals or as dishes you can make to entertain guests. Each recipe is easy to follow, with clear instructions on how to make important elements, such as a creamy vegan cheese sauce, from scratch.
https://bestofvegan.com/culture-tuesday-vegan-soul-food/
This paper argues that to better understand what is required to meaningfully preserve digital information, we should attempt to create a foundation for the concept of the authenticity of informational entities that transcends the multiple disciplines in which this concept arises. Whenever informational entities are used and for whatever purpose, their suitability relies on their authenticity. Yet archivists, librarians, museum curators, historians, scholars, and researchers in various fields define authenticity in distinct, though often overlapping, ways. They combine legal, ethical, historical, and artistic perspectives such as the desire to provide accountability, the desire to ensure proper attribution, or the desire to recreate, contextualize, or interpret the original meaning, function, impact, effect, or aesthetic character of an artifact. Each discipline may have its own explicit definition of authenticity; however, in interdisciplinary discussions of authenticity, the dependence of a given definition on its discipline is often manifested only implicitly. The technological issues surrounding the preservation of digital informational entities interact with authenticity in novel and profound ways. We are far more likely to achieve meaningful insights into the implications of these interactions if we develop a unified, coherent, discipline-transcendent view of authenticity. Such a view would
- improve communication across disciplines;
- provide a better basis for understanding what preservation requirements are implied by the need for authenticity; and
- facilitate the development of common preservation strategies that would work for as many different disciplines as possible and thereby effect technological economies of scale.
Developing a preservation strategy that economically transcends disciplines would free preservationists from the need for discipline-specific definitions of authenticity. In this paper, I will suggest that there is at least one preservation strategy, based on the notion of a digital-original, that makes the details of how we define authenticity all but irrelevant from the perspective of preservation. However, to derive this conclusion, it is necessary to examine authenticity in some depth. Although a discipline-transcendent view of authenticity would be the ideal, it may turn out to be impractical. If so, we may need to settle for a multidisciplinary perspective. This means establishing either a unified concept of authenticity as it is used in a subgroup of disciplines (such as archives, libraries, and museums) or a set of variant concepts of authenticity, each of which addresses the specific needs of a different discipline yet retains as much in common with the other concepts as possible.
Basic Definitions
The term informational entity, as used here, refers to an entity whose purpose or role is informational. By definition, any informational entity is entirely characterized by information, which may include contextual and descriptive information as well as the core entity. Examples of informational entities include digital books, records, multimedia objects, Web pages, e-mail messages, audio or video material, and works of art, whether they are "born digital" or digitized from analog forms.
It is not easy for computer scientists to agree on a definition of the word digital.1 In the current context, it generally denotes any means of representing sequences of discrete symbolic values (each value having two or more unambiguously distinguishable states) so that these sequences can, at least in principle, be accessed, manipulated, copied, stored, and transmitted entirely by mechanical means with a high degree of reliability (Rothenberg 1999). Digital informational entities are defined in the next section. The term authenticity is even harder to define, but the term is used here in its broadest sense. Its meaning is not restricted to authentication, as in verifying authorship, but is intended to include issues of integrity, completeness, correctness, validity, faithfulness to an original, meaningfulness, and suitability for an intended purpose. I leave to specialists in various scholarly disciplines the task of elaborating the dimensions of authenticity in those disciplines. The focus of this paper is the interplay between those dimensions and the technological issues involved in preserving digital informational entities. The dimensions of authenticity have a profound effect on the technical requirements of any preservation scheme, digital or otherwise. The remainder of this paper discusses the importance of understanding authenticity as a prerequisite to defining meaningful digital preservation.
Digital Informational Entities are Executable Programs
The distinguishing characteristic of a digital informational entity is that it is essentially a program that must be interpreted to be made intelligible to a human: it cannot simply be held up to the light to be read. A program is a sequence of commands in a formal language that is intended to be read by an interpreter that understands that language.2 An interpreter is a process that knows how to perform the commands specified in the formal language in which the program is written. Even a simple text document consisting of a stream of ASCII character codes is a program, i.e., it is a sequence of commands in a formal language (each command specifying a character to be rendered) that must be interpreted before it can be read by a human. More elaborate digital formats, such as distributed, hypermedia documents, may (in addition to requiring interpretation for navigation and rendering) embed macros, scripts, animation processes, or other active components, any of which may require arbitrarily complex interpretation. Some programs are interpreted directly by hardware (for example, a printer may render ASCII characters from their codes), but the interpreters of most digital informational entities are software (i.e., application programs). Any software interpreter must itself be interpreted by another hardware or software interpreter, but any sequence of software interpretations must ultimately result in some lowest-level ("machine language") expression that is interpreted ("executed") by hardware. It follows that it is not sufficient to save the bit stream of a digital informational entity without also saving the intended interpreter of that bit stream.
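As a small, hypothetical illustration of this point, consider a single stored byte sequence rendered by two different software interpreters: the bits alone do not determine what a reader sees. The Python sketch below uses made-up data and rendering rules chosen only for illustration.

    # One stored byte sequence, two interpreters: the rendering rule, not the bits
    # alone, determines what a human reader sees.
    raw = bytes([0x48, 0x6F, 0x6C, 0x61])  # the preserved bit stream

    def render_as_latin1(bits: bytes) -> str:
        # Interpreter A: treat each byte as a Latin-1 character code.
        return bits.decode("latin-1")

    def render_as_16bit_numbers(bits: bytes) -> str:
        # Interpreter B: treat each consecutive pair of bytes as one 16-bit integer.
        return " ".join(str(int.from_bytes(bits[i:i + 2], "big"))
                        for i in range(0, len(bits), 2))

    print(render_as_latin1(raw))         # prints: Hola
    print(render_as_16bit_numbers(raw))  # prints: 18543 27745

Preserving only raw while losing the knowledge that render_as_latin1 was the intended interpreter would preserve the bits but not the document.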
Doing so would be analogous to saving hieroglyphics without saving a Rosetta Stone.3 In light of this discussion, it is useful to define a digital informational entity as consisting of a single, composite bit stream4 that includes the following:
- the bit stream representing the core content of the entity (that is, the encoding of a document, data, or a record), including all structural information required to constitute the entity from its various components, wherever and however they may be represented;
- component bit streams representing all necessary contextual or ancillary information or metadata needed to make the entity meaningful and usable; and
- one or more component bit streams representing a perpetually executable interpreter capable of rendering the core content of the entity from its bit stream, in the manner intended.5
If we define a digital informational entity in this way, as including both any necessary contextual information and any required interpreter, we can see that preserving such an entity requires preserving all of these components.6 Given this definition, one of the key technical issues in preserving digital informational entities becomes how to devise mechanisms for ensuring that interpreters can be made perpetually executable.
Preservation Implies Meaningful Usability
The relationship between digital preservation and authenticity stems from the fact that meaningful preservation implies the usability of that which is preserved. That is, the goal of preservation is to allow future users to retrieve, access, decipher, view, interpret, understand, and experience documents, data, and records in meaningful and valid (that is, authentic) ways. An informational entity that is "preserved" without being usable in a meaningful and valid way has not been meaningfully preserved, i.e., has not been preserved at all. As a growing proportion of the informational entities that we create and use become digital, it has become increasingly clear that we do not have effective mechanisms for preserving digital entities. As I have summarized this problem elsewhere: "There is as yet no viable long-term strategy to ensure that digital information will be readable in the future. Digital documents are vulnerable to loss via the decay and obsolescence of the media on which they are stored, and they become inaccessible and unreadable when the software needed to interpret them, or the hardware on which that software runs, becomes obsolete and is lost" (Rothenberg 1999). The difficulty of defining a viable digital preservation strategy is partly the result of our failing to understand and appreciate the authenticity issues surrounding digital informational entities and the implications of these issues for potential technical solutions to the digital preservation problem. The following argues that the impact of authenticity on preservation is manifested in terms of usability, namely that a preserved informational entity can serve its intended or required uses if and only if it is preserved authentically. For traditional, analog informational entities, the connection between preservation and usability is obvious. If a paper document is "preserved" in such a way that the ink on its pages fades into illegibility, it probably has not been meaningfully preserved. Yet even in the traditional realm, it is at least implicitly recognized that informational entities have a number of distinct attributes that may be preserved differentially.
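The composite definition above, together with the usability requirement, can be summarized in a small, hypothetical data-structure sketch; the names and the trivial text interpreter are illustrative assumptions. An entity bundles core content, ancillary metadata, and an interpreter, and it remains usable only while that interpreter can still be executed.

    from dataclasses import dataclass, field
    from typing import Callable, Dict

    @dataclass
    class DigitalInformationalEntity:
        # The three components of the composite bit stream defined above.
        core_content: bytes                                     # encoded document, data, or record
        metadata: Dict[str, str] = field(default_factory=dict)  # contextual/ancillary information
        interpreter: Callable[[bytes], str] = bytes.decode      # renders the core content

        def render(self) -> str:
            # The entity is usable only while its interpreter can still be executed.
            return self.interpreter(self.core_content)

    entity = DigitalInformationalEntity(
        core_content=b"Minutes of the 1999 board meeting.",
        metadata={"provenance": "records office", "format": "text/plain"},
    )
    print(entity.render())

In such a representation it is also clear that the components can survive or be lost independently, which is exactly the sense in which the attributes of an informational entity may be preserved differentially.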
For example, stone tablets bearing hieroglyphics that were physically preserved before the discovery of the Rosetta Stone were nevertheless unreadable because the ability to read the language of their text had been lost. Similarly, although the original Declaration of Independence has been preserved, most of its signatures have faded into illegibility. Many statues, frescos, tapestries, illuminated manuscripts, and similar works are preserved except for the fact that their pigments have faded, often beyond recognition. Although it is not always possible to fully preserve an informational entity, it may be worth preserving whichever attributes can be preserved if doing so enables the entity to be used in a meaningful way. In other words, if preserving certain attributes of an informational entity may allow it to fulfill some desired future use, then we are likely to consider those attributes worth preserving and to consider that we have at least partially preserved the entity by preserving those attributes. Generalizing from this, the meaningful preservation of any informational entity is ultimately defined in terms of which of its attributes can and must be preserved to ensure that it will fulfill its future use, whether originally intended, subsequently expected, or unanticipated. Deciding which attributes of traditional informational entities to preserve involves little discretion. Because a traditional informational entity is a physical artifact, saving it in its entirety preserves (to the extent possible) all aspects of the entity that are inherent in its physical being, which is to say all of its attributes. Decisions may still have to be made, for example, about what technological measures should be used to attempt to preserve attributes such as color. For the most part, however, saving any aspect of a traditional information entity saves every aspect, because all of its aspects are embodied in its physicality. For digital informational entities, the situation is quite different. There is no accepted definition of digital preservation that ensures saving all aspects of such entities. By choosing a particular digital preservation method, we determine which aspects of such entities will be preserved and which ones will be sacrificed. We can save the physical artifact that corresponds to a traditional informational entity in its entirety; however, there is no equivalent option for a digital entity.7 The choice of any particular digital preservation technology therefore has inescapable implications for what will and will not be preserved. In the digital case (so far, at least), we must choose what to lose (Rothenberg and Bikson 1999). This situation is complicated by the fact that we currently have no definitive taxonomies either of the attributes of digital informational entities or of the uses to which they may be put in the future. Traditional informational entities have been around long enough that we can feel some confidence in understanding their attributes, as well as the ways in which we use them. Anyone who claims to have a corresponding understanding of digital informational entities lacks imagination. Society has barely begun to tap the rich lode of digital capabilities waiting to be mined. The attributes of future digital informational entities, the functional capabilities that these attributes will enable, and the uses to which they may be put defy prediction. 
Strategies for Defining Authenticity
It is instructive to consider several strategies that can be used to define authenticity. Each strategy may lead to a number of different ways of defining the concept and may, in turn, involve a number of alternative tactics that enable its implementation. One strategy is to focus on the originality of an informational entity, that is, on whether it is unaltered from its original state. This strategy works reasonably well for traditional, physical informational entities but is problematic for digital informational entities. The originality strategy can be implemented by means of several tactics. One such tactic is to focus on the intrinsic properties of an informational entity by providing criteria for whether each property is present in its proper, original form. For example, one can demand that the paper and ink of a traditional document be original and devise chemical, radiological, or other tests of these physical properties.8 A second tactic for implementing the originality strategy is to focus on the process by which an entity is saved, relying on its provenance or history of custodianship to warrant that the entity has not been modified, replaced, or corrupted and must therefore be original. For example, from an archival perspective, a record is an informational artifact that provides evidence of some event or decision that was performed as part of the function of some organization or agency. The form and content of the record convey this evidence, but the legitimacy of the evidence rests on being able to prove that the record is what it purports to be and has not been altered or corrupted in such a way as to invalidate its evidential meaning. The archival principle of provenance seeks to establish the authenticity of archival records by providing evidence of their origin, authorship, and context of generation, and then by proving that the records have been maintained by an unbroken chain of custodianship in which they have not been corrupted. Relying on this tactic to ensure the authenticity of records involves two conditions: first, that an unbroken chain of custodianship has been maintained; and second, that no inappropriate modifications have been made to the records during that custodianship. The first of these conditions is only a way of supplying indirect evidence for the second, which is the one that really matters. An unbroken chain of custodianship does not in itself prove that records have not been corrupted, whereas if we could prove that records had not been corrupted, there would be no logical need to establish that custodianship had been maintained. However, since it is difficult to obtain direct proof that records have not been corrupted, evidence of an unbroken chain of custodianship serves, at least for traditional records, as a surrogate for such proof. Intrinsic properties of the entity may be completely ignored using this tactic, which relies on the authenticity of documentation of the process by which the entity has been preserved as a surrogate for the intrinsic authenticity of the entity. This has a somewhat recursive aspect, since the authenticity of this documentation must in turn be established; however, in many cases, this is easier than establishing the authenticity of the entity itself. Alternatively, an intrinsic properties strategy can be based solely on the intrinsic properties tactic discussed above.
This involves identifying certain properties of an informational entity that define authenticity, regardless of whether they imply the originality of the entity. For example, one might define an authentic impressionistic painting as one that conforms to the style and methods of Impressionism, regardless of when it was painted or by whom. A less controversial example might be a jade artifact that is considered “authentic” merely by virtue of being truly composed of jade.9 Whether this strategy is viable for a given discipline depends on whether the demands that the discipline places on informational entities can be met by ensuring that certain properties of those entities meet specified criteria, regardless of their origin. Although there are undoubtedly other strategies, the final one I will consider here is to define authenticity in terms of whether an informational entity is suitable for some purpose. This suitability strategy would use various tactics to specify and test whether an informational entity fulfills a given range of purposes or uses. This may be logically independent of whether the entity is original. Similarly, although the suitability of an entity for some purpose is presumably related to whether certain of its properties meet prescribed criteria, under this strategy both the specific properties involved and the criteria for their presence are derived entirely from the purpose that the entity is to serve. Since a given purpose may be satisfiable by means of a number of different properties of an entity, the functional orientation of this strategy makes it both less demanding and more meaningful than the alternatives.10 The range of uses that an entity must satisfy to be considered authentic under this strategy may be anticipated in advance or allowed to evolve over time.11 Authenticity as Suitability for a Purpose In the context of preservation, authenticity is inherently related to time. A piece of jade may be authentic, irrespective of its origin or provenance; however, a specific preserved jade artifact has additional requirements for being authentic in the historical sense.12 The alternative strategies and tactics presented above for defining authenticity suggest the range of meanings that may be attributed to the concept, but all of these imply the retention of some essential properties or functional capabilities over time.13 Authenticity seems inextricably bound to the notion of suitability for a purpose. A possible exception is the case where originality per se serves as the criterion for authenticity. Such is the case, for example, for venerated artifacts such as the Declaration of Independence. Even if such an entity ultimately becomes unsuitable for its normal purpose (for example, if it becomes unreadable), it continues to serve some purpose-in this example, veneration. In all cases, therefore, authenticity implies some future purpose or use, such as the ability to obtain factual information, prove legal accountability, derive aesthetic appreciation, or support veneration. While recognizing that it is likely to be a contentious position, I will assume in the remainder of this paper that the authenticity of preserved informational entities in any domain is ultimately bound to their suitability for specific purposes that are of interest within that domain. At any point in time, it is generally considered preferable to be able to articulate a relatively stable, a priori set of principles for any discipline. 
For this reason, a posteriori criteria for authenticity may generate a degree of intellectual anxiety among theoreticians. Some archivists, for example, argue that archival theory specifies a precise, fixed set of suitability requirements for authentically preserved records, namely that future users should be able to understand the roles that the records played in the business processes of the organizations that generated and used them, and that users should be able to continue to use the records in any future business processes that may require them (e.g., for determining past accountability). Similarly, some libraries of deposit may require, to the extent possible, that future users be able to see and use authentically preserved publications exactly as their original audiences did. On the other hand, a data warehouse might require that authentic preservation allow future users to explore implicit relationships in data that the original users were unable to see or define. In different ways, all these examples attempt to allow for unanticipated future uses of preserved informational entities. They also reveal a tension between the desire to articulate fixed, a priori criteria for authenticity and the need to define criteria that are general enough to satisfy unanticipated future needs. This suggests that we distinguish between a priori suitability criteria, which specify in advance the full range of uses that authentically preserved informational entities must support, and a posteriori suitability criteria, which require such entities to support unanticipated future uses. The a priori approach will work only in a discipline that carefully articulates its preservation mandate and successfully (for all time) proscribes any attempt to expand that mandate retroactively.14 In contrast, an evolutionary, a posteriori approach to defining suitability criteria should be adopted by disciplines that are less confident of their ability to ward off all future attempts to expand their suitability requirements or those whose preservation mandates are intentionally dynamic and designed to adapt to future user needs and demands as they arise. Authenticity Principles and Criteria Because it is so difficult to define authenticity abstractly, it is useful to try to develop authenticity principles for various domains or disciplines that will make it possible to define authenticity in functional terms. An authenticity principle encapsulates the overall intent of authentic preservation from a given legal, ethical, historical, artistic, or other perspective-for example, to assess accountability or to recreate the original function, impact, or effect of preserved entities. Ideally, an authenticity principle should be a succinct, functional statement of what constitutes authentic preservation from a specific, stated perspective. Requiring that these principles be stated functionally allows them to be used in verifying whether a given preservation approach satisfies a given principle. For example, one possible archival authenticity principle was proposed above, namely, to enable future users to understand the roles that preserved records played in the business processes of the organizations that generated and used them, and to continue to use those records in future business processes that may require them. Alternative authenticity principles might be proposed for archives as well as for other disciplines. 
It would be desirable to devise a relatively small number of alternative authenticity principles that collectively capture the perspectives of most disciplines concerned with the preservation of informational entities. Next, from each authenticity principle, it is useful to derive a set of authenticity criteria to serve both as generators for specific preservation requirements and as conceptual and practical tests of the success of specific preservation techniques. For example, to implement the authenticity principle described previously, authenticity criteria would be derived that specify which aspects of records and their context must be preserved to satisfy that principle. These criteria would then provide a basis for developing preservation requirements, such as the need to retain metadata describing provenance, as well as tests of whether and how well alternative preservation techniques satisfy those requirements. The a priori/a posteriori dichotomy mentioned previously arises again in connection with authenticity principles. From a theoretical perspective, it is more attractive to derive such principles a priori, without the need to consider any future, unanticipated uses to which informational entities may be put. If authenticity principles are derived a posteriori, then they may evolve in unexpected ways as unanticipated uses arise. This situation is unappealing to many disciplines. In either case, if authenticity is logically determined by suitability for some purpose, then an authenticity principle for a given domain will generally be derived, explicitly or implicitly, from the expected range of uses of informational entities within that domain. It may, therefore, be helpful to discuss ways of characterizing such expected ranges of use before returning to the subject of authenticity principles and criteria. Describing Expected Ranges of Use of Preserved Informational Entities If expected use is to serve as a basis from which to derive authenticity criteria for a given discipline or organization, then it is important to describe the range of expected uses of informational entities that is relevant to that discipline or organization. This description should consist of a set of premises, constraints, and expectations for how particular kinds of informational entities are likely to be used. It should include the ways in which entities may be initially generated or captured (in digital form, for digital informational entities). It should include the ways in which they may be annotated, amended, revised, organized, and structured into collections or series; published or disseminated; managed; and administered. It should describe how the informational entities will be accessed and used, whether by the organization that generates them or by organizations or individuals who wish to use them in the future for informational, historical, legal, cultural, aesthetic, or other purposes. The description should also include any legal mandates or other exogenous requirements for preservation, access, or management throughout the life of the entities, and it should ideally include estimates of the expected relative and absolute frequencies of each type of access, manipulation, and use.15 Additional aspects of a given range of expected uses may be added as appropriate. 
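To make the kind of description just outlined more concrete, the sketch below shows one hypothetical way an "expected range of use" could be recorded as a structured, machine-readable profile and used to derive the attributes whose preservation it implies. This is only an illustration of the idea, not a method proposed in the report: the field names, use categories, and the mapping from uses to attributes are invented for this example and are not drawn from any actual preservation standard.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ExpectedUseProfile:
    """Hypothetical, illustrative description of the expected range of use
    for one class of digital informational entities."""
    entity_type: str                          # e.g., "e-mail records of a public agency"
    capture_modes: List[str]                  # how entities are generated or captured
    expected_uses: List[str]                  # anticipated future uses
    legal_mandates: List[str]                 # exogenous retention/access requirements
    relative_use_frequency: Dict[str, float]  # use -> estimated relative frequency

    # Invented mapping from expected uses to attributes that must be preserved;
    # in practice each discipline would derive its own mapping.
    _USE_TO_ATTRIBUTES = {
        "prove-accountability": ["content", "provenance metadata", "timestamps"],
        "render-as-original-reader-saw-it": ["content", "appearance", "layout", "behavior"],
        "data-mining": ["content", "internal structure"],
    }

    def implied_attributes(self) -> List[str]:
        """Derive the set of attributes whose preservation this profile implies."""
        needed = set()
        for use in self.expected_uses:
            needed.update(self._USE_TO_ATTRIBUTES.get(use, []))
        return sorted(needed)

# A hypothetical archival profile and the attributes it implies.
profile = ExpectedUseProfile(
    entity_type="e-mail records of a public agency",
    capture_modes=["captured from the mail server at time of sending"],
    expected_uses=["prove-accountability", "render-as-original-reader-saw-it"],
    legal_mandates=["retain for 25 years", "open to public access after 10 years"],
    relative_use_frequency={"prove-accountability": 0.8,
                            "render-as-original-reader-saw-it": 0.2},
)
print(profile.implied_attributes())
```

Nothing in this sketch is prescriptive; it merely suggests how such a profile, once written down, could serve as the starting point for deriving the authenticity criteria discussed in the following paragraphs.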
Any attempt to enunciate comprehensive descriptions of ranges of expected uses of this kind for digital informational entities (especially in the near future, before much experience with such entities has been accumulated) will necessarily be speculative. In all likelihood, it will be over-constrained in some aspects and under-constrained in others. Yet, it is important to try, however tentative the results, if suitability is to serve as a basis for deriving authenticity criteria. Deriving Authenticity Principles from Expected Ranges of Use The purpose of describing an expected range of use for informational entities is to provide a basis from which to derive a specific authenticity principle. Any authenticity principle is an ideal and may not be fully achievable under a particular set of technological and pragmatic constraints. Nevertheless, stating an authenticity principle defines a set of criteria to which any preservation approach must aspire. Different ranges of expected use may result in different authenticity principles. One extreme is that a given range of expected uses might imply the need for a digital informational entity to retain as much as possible of the function, form, appearance, look, and feel that the entity presented to its author. Such a need might exist, for example, if future researchers wish to evaluate the range of alternatives that were available to the author and, thereby, the degree to which the resulting form of the entity may have been determined by constraint versus choice or chance. A different range of expected uses might imply the need for a digital informational entity to retain the function, form, appearance, look, and feel that it presented to its original intended audience or readership. This would enable future researchers to reconstruct the range of insights or inferences that the original users would have been able to draw from the entity. Whereas retaining all the capabilities that authors would have had in creating a digital informational entity requires preserving the ability to modify and reformat that entity using whatever tools were available at the time, retaining the capabilities of readers merely requires preserving the ability to display, or render, the entity as it would have been seen originally. Finally, a given range of expected uses may delineate precise and constrained capabilities that future users are to be given in accessing a given set of digital informational entities, regardless of the capabilities that the original authors or readers of those entities may have had. Such delineated capabilities might range from simple extraction of content to more elaborate viewing, rendering, or analysis, without considering the capabilities of original authors or readers. As in the data warehouse example cited previously, it might be important to enable future users to draw new inferences from old data, using tools that may not have been available to the data's original users. As these examples suggest, it is possible to identify alternative authenticity principles that levy different demands against preservation. 
For example, the following sequence of decreasingly stringent principles is stated in terms of the relationship between a preserved digital informational entity and its original instantiation:
- same for all intents and purposes
- same functionality and relationships to other informational entities
- same "look and feel"
- same content (for any definition of the term)
- same description16
An authenticity principle must also specify requirements for the preservation of certain meta-attributes, such as authentication and privacy or security. For example, although a signature (whether digital or otherwise) in a record may normally be of no further interest once the record has been accepted into a recordkeeping system (whose custodianship thereafter substitutes its own authentication for that of the original), the original signature in a digital informational entity may on occasion be of historical, cultural, or technical interest, making it worth preserving as part of the "content" of the entity, as opposed to an active aspect of its authentication. Similarly, although the privacy and security capabilities of whatever system is used to preserve an informational entity may be sufficient to ensure the privacy and security of the entity, there may be cases in which the original privacy or security scheme of a digital informational entity may be of interest in its own right. An authenticity principle should determine a complete, albeit abstract, specification of all such aspects of a digital informational entity that must be preserved. Since an authenticity principle encapsulates the preservation implications of a range of expected uses, it should always be derived from a specific range of this sort. Simply inventing an authenticity principle, rather than deriving it in this way, is methodologically unsound. The range of expected uses grounds the authenticity principle in reality and allows its derivation to be validated or questioned. Nevertheless, as discussed previously, since the range of expected uses for digital informational entities is speculative, the formal derivation of an authenticity principle may remain problematic for some time. Different types of digital informational entities that fall under a given authenticity principle (within a given domain of use) may have different specific authenticity criteria. For example, authenticity criteria for databases or compound multimedia entities may differ from those for simple textual entities. Furthermore, digital informational entities may embody various behavioral attributes that may, in some cases, be important to retain. In particular, these entities may exhibit dynamic or interactive behavior that is an essential aspect of their content, they may include active (possibly dynamic) linkages to other entities, and they may possess a distinctive look and feel that affects their interpretation. To preserve such digital entities, specific authenticity criteria must be developed to ensure that the entities retain their original behavior, as well as their appearance, content, structure, and context. Originality Revisited As discussed earlier, the authenticity of traditional informational entities is often implicitly identified with ensuring that original entities have been retained. Both the notion of custodianship and the other component concepts of the archival principle of provenance (such as le respect des fonds and le respect de l'ordre intérieur) focus on the sanctity of the original (Horsman 1994). 
Although it may not be realistic to retain every aspect of an original entity, the intent is to retain all of its meaningful and relevant aspects. Beyond the appropriate respect for the original, there is often a deeper fascination, sometimes called a fetish, for the original when it is valued as a historical or quasi-religious artifact. While fetishism may be understandable, its legitimacy as a motivator for preservation seems questionable. Moreover, fetishism notwithstanding, the main motivation for preserving original informational entities is the presumption that an original entity retains the maximum possible degree of authenticity. Though this may at first glance appear to be tautological, the tautology applies only to traditional, physical informational entities. Retaining an original physical artifact without modifying it in any way would seem almost by definition to imply its authenticity. However, it is generally impossible to guarantee that a physical artifact can be retained without changing in any way (for example, by aging). Therefore, a more realistic statement would be that retaining an original without modifying it in any way that is meaningful and relevant (from some appropriate perspective) implies its authenticity. The archival emphasis on custodianship and provenance is at least partly a tactic for ensuring the retention of original records to maximize the likelihood of retaining their meaningful and relevant aspects, thereby ensuring their authenticity. Tautologically, an unmodified original is as authentic as a traditional, physical informational entity can be. If we consider informational entities as abstractions rather than as physical artifacts, however, this tautology disappears. Although the informational aspects of such an entity may be represented in some particular physical form, they are logically independent of that representation, just as the Pythagorean Formula is independent of any particular physical embodiment or expression of that formula. An informational entity can be thought of as having a number of attributes, some of which are relevant and meaningful from a given perspective and some of which are not. For example, it might be relevant from one perspective that a given document was written on parchment but irrelevant that it was signed in red ink; from a different perspective, it might be relevant that it was signed in red yet irrelevant that it was written on parchment. The specific set of attributes of a given informational entity that is relevant and meaningful from one perspective may be difficult to define precisely. The full range of all such attributes that might be relevant from all possible perspectives may be open-ended. In all cases, however, some set of relevant logical attributes must exist, whether or not we can list them. This implies that retaining the original physical artifact that represents an informational entity is at most sufficient (in the case of a traditional informational entity) but is never logically necessary to ensure its authenticity. If the relevant and meaningful attributes of the entity were retained independently of its original physical embodiment, they would by definition serve the same purpose as the original. Furthermore, since it is impossible to retain all attributes of a physical artifact in the real world because of aging, retaining the original physical artifact for an informational entity may not be sufficient, since it may lose attributes that are relevant and meaningful for a given purpose. 
(For example, the color of a signature may fade beyond recognition.) Retaining an original physical artifact is therefore neither necessary nor sufficient to ensure the authenticity of an informational entity. Digital Informational Entities and the Concept of an Original The preceding argument applies a fortiori to digital informational entities. It is well accepted that the physical storage media that hold digital entities have regrettably short lifetimes, especially when obsolescence is taken into account. Preserving these physical storage media as a way of retaining the informational entities they hold is not a viable option. Rather, it is almost universally acknowledged that meaningful retention of such entities requires copying them onto new media as old media become physically unreadable or otherwise inaccessible. Fortunately, the nature of digital information makes this process far less problematic than it would be for traditional informational entities. For one thing, digital information is completely characterized by simple sequences of symbols (zero and one bits in the common, binary case). All of the information in a digital informational entity lies in its bit stream (if, as argued earlier, this is taken to include all necessary context, interpreter software, etc.). Although this bit stream may be stored on many different kinds of recording media, the digital entity itself is independent of the medium on which it is stored. One of the most fundamental aspects of digital entities is that they can be stored in program memory; on a removable disk, hard disk, CD-ROM; or on any future storage medium that preserves bit streams, without affecting the entities themselves.17 One unique aspect of digital information is that it can be copied perfectly and that the perfection of a copy can be verified without human effort or intervention. This means that, at least in principle, copying digital informational entities to new media can be relied upon to result in no loss of information. (In practice, perfection cannot be guaranteed, but increasingly strong assurances of perfection can be attained at relatively affordable cost.) The combination of these two facts-that digital informational entities consist entirely of bit streams, and that bit streams can be copied perfectly onto new media-makes such entities logically independent of the physical media on which they happen to be stored. This is fortunate since, as pointed out above, it is not feasible to save the original physical storage artifact (e.g., disk and tape) that contains a digital informational entity. The deeper implication of the logical independence of digital informational entities from the media on which they are stored is that it is meaningless to speak of an original digital entity as if it were a unique, identifiable thing. A digital document may be drafted in program memory and saved simultaneously on a variety of storage media during its creation. The finished document may be represented by multiple, identical, equivalent copies, no one of which is any more “original” than any other. Furthermore, copying a digital entity may produce multiple instances of the entity that are logically indistinguishable from each other.18 Defining Digital-Original Informational Entities It is meaningless to rely on physical properties of storage media as a basis for distinguishing original digital informational entities. It is likewise meaningless to speak of an original digital entity as a single, unique thing. 
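The point made above, that the perfection of a digital copy can be verified mechanically and that identical instances on different media are therefore indistinguishable, can be illustrated with a minimal sketch that compares cryptographic fingerprints of two stored bit streams. The file paths are hypothetical; a real repository would typically record such digests as fixity metadata and re-check them after every migration.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 digest of a file's bit stream, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical instances of the same digital entity held on different media.
copy_on_old_medium = Path("/mnt/legacy_tape/report_1994.pdf")
copy_on_new_medium = Path("/mnt/new_archive/report_1994.pdf")

# If the digests match, the two bit streams are identical: the migration lost
# nothing, and neither instance is any more "original" than the other.
if fingerprint(copy_on_old_medium) == fingerprint(copy_on_new_medium):
    print("Bit streams identical: the copy is a perfect surrogate.")
else:
    print("Bit streams differ: the migration must be investigated.")
```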
Nevertheless, the concept of an “original” is so pervasive in our culture and jurisprudence that it seems worth trying to salvage some vestige of its traditional meaning. It appears that the true significance (in the preservation context) of an original traditional informational entity is that it has the maximum possible likelihood of retaining all meaningful and relevant aspects of the entity, thereby ensuring its authenticity. By analogy, we therefore define a digital-original as any representation of a digital informational entity that has the maximum possible likelihood of retaining all meaningful and relevant aspects of the entity. This definition does not imply a single, unique digital-original for a given digital informational entity. All equivalent digital representations that share the defining property of having the maximum likelihood of retaining all meaningful and relevant aspects of the entity can equally be considered digital-originals of that entity. This lack of uniqueness implies that a digital-original of a given entity (not just a copy) may occur in multiple collections and contexts. This appears to be an inescapable aspect of digital informational entities and is analogous to the traditional case of a book that is an instance of a given edition: it is an original but not the original, since no single, unique original exists.19 It is tempting to try to eliminate the uncertainty implied by the phrase maximum possible likelihood, but it is not easy to do so. This uncertainty has two distinct dimensions. First, it is difficult enough to specify precisely which aspects of a particular informational entity are meaningful and relevant for a given purpose, let alone which aspects of any such entity might be meaningful and relevant for any possible purpose. Since we cannot in general enumerate the set of such meaningful, relevant aspects of an informational entity, we cannot guarantee, or even evaluate, their retention. Second, physical and logical constraints may make it impossible to guarantee that any digital-original will be able to retain all such aspects, any more than we can guarantee that a physical original will retain all relevant aspects of a traditional informational entity as it ages and wears. The uncertainty in our definition of digital-original therefore seems irreducible; however, its impact is no more damaging than the corresponding uncertainty for physical originals of traditional informational entities. Although the definition used here does not imply any particular technical approach, the concept appears to have at least one possible implementation, based on emulation (Michelson and Rothenberg 1992; Rothenberg 1995, 1999; Erlandsson 1996; Swade 1998). In any case, any implementation of this approach must ensure that the interpreters of digital informational entities, themselves saved as bit streams, can be made perpetually executable. If this can be achieved, it should enable us to preserve digital-original informational entities that maintain their authenticity across all disciplines, by retaining as many of their attributes as possible. Conclusion If a single, uniform technological approach can be devised that authentically preserves all digital-informational entities for the purposes of all disciplines, the resulting economies of scale will yield tremendous benefits. To pave the way for this possibility, I have proposed a foundation for a universal, transdisciplinary concept of authenticity based on the notion of suitability. 
This foundation allows the specific uses that an entity must fulfill to be considered authentic to vary across disciplines; however, it also provides a common vocabulary for expressing authenticity principles and criteria, as well as a common basis for evaluating the success of any preservation approach. I have also tried to show that many alternative strategies for determining authenticity ultimately rely on the preservation of relevant, meaningful aspects or attributes of informational entities. By creating digital-original informational entities that have the maximum possible likelihood of retaining all such attributes, we should be able to develop a single preservation strategy that will work across the full spectrum of disciplines, regardless of their individual definitions of authenticity. FOOTNOTES 1. One could argue that if the key terms of any discipline are not susceptible to multiple interpretations and endless analysis, then that discipline has little depth. 2. Many programs are compiled or translated into some simpler formal language first, but the result must still ultimately be interpreted. The distinction between compilation and interpretation will therefore be ignored here. 3. Although this analogy is suggestive, it is simplistic, since the interpreter of a digital informational entity is itself usually an executable application program, not simply another document. 4. Any number of component bit streams can be represented as a single, composite bit stream. 5. The word rendering is used here as a generalization of its use in computer graphics, namely, the process of turning a data stream into something a human can see, hear, or otherwise experience. 6. Metadata and interpreter bit streams can be shared among many digital informational entities. Although they must be logical components of each such entity, they need not be redundantly represented. 7. In particular, saving the bit stream corresponding to the core content of such an entity is insufficient without saving some way of interpreting that bit stream, for example, by saving appropriate software (another bit stream) in a way that enables running that software in the future, despite the fact that it, and the hardware on which it was designed to run, may be obsolete. 8. New criteria based on newly recognized properties of informational entities may be added over time, as is the case when evaluating radiological properties of artifacts whose origins predate the discovery of radioactivity. 9. Here authenticity refers to a specific attribute of an entity (i.e., its chemical composition) rather than to the entity as a whole that is of concern for preservation purposes. 10. Although it is tempting to consider the suitability of an informational entity to be constrained only by technical factors, legal, social and economic factors often override technical considerations. For example, the suitability of an informational entity for a given purpose may be facilitated or impeded by factors such as the way it is controlled and made available to potential users. Therefore, if it is to serve as a criterion for authenticity, suitability must be understood to mean the potential suitability of an entity for some purpose, i.e., that which can be realized in the absence of arbitrary external constraints. 11. Because the strategy potentially leads to dynamic, evolving definitions of authenticity, it has a decidedly a posteriori flavor, which may be inescapable. 12. 
In the remainder of this paper, authenticity will be used exclusively in the context of preservation. 13. Whereas the originality strategy entails no explicit property or capability conditions (though some tactics for evaluating originality may rely on such conditions), it nevertheless implicitly assumes that simply by virtue of being original, an entity will retain as many of its properties and capabilities as possible. 14. Non-retroactive expansion can be accommodated by revising the corresponding suitability criteria for all informational entities to be preserved henceforth. 15. Future patterns of access for digital records may be quite different from historical or current patterns of access for traditional records, making it difficult to obtain meaningful information of this kind in the near future. Nevertheless, any preservation strategy is likely to depend at least to some extent on assumptions about such access patterns. The library community has performed considerable user research on the design of online public catalogs that may be helpful in this endeavor. For example, see M. Ongering, Evaluation of the Dutch national OPAC: the userfriendliness of PC3, Leiden 1992; Common approaches to a user interface for CD-ROM—Survey of user reactions to three national bibliographies on CD-ROM, British Library and The Royal Library, Denmark, Copenhagen, April, 1992; V. Laursen and A. Salomonsen, National Bibliographies on CD-ROM: Definition of User-dialogues Documentation of Criteria Used, The Royal Library, Denmark, Copenhagen, March 1991. 16. This requires preserving only a description of the entity (i.e., metadata). The entity itself can in effect be discarded if this principle is chosen. 17. In some cases, the original storage medium used to hold a digital informational entity may have some significance, just as it may be significant that a traditional document was written on parchment rather than paper. For example, the fact that a digital entity was published on CD-ROM might imply that it was intended to be widely distributed (although the increasing use of CD-ROM as a back-up medium serves as an example of the need for caution in drawing such conclusions from purely technical aspects of digital entities, such as how they are stored). However, even in the rare cases where such physical attributes of digital informational entities are in fact meaningful, that meaning can be captured by metadata. Operational implications of storage media—for example, whether an informational entity would have been randomly and quickly accessible, unalterable, or constrained in various ways, such as by the size or physical format of storage volumes—are similarly best captured by metadata to eliminate dependence on the arcane properties of these quickly obsolescent media. To the extent that operational attributes such as speed of access may have constrained the original functional behavior of a digital informational entity that was stored on a particular medium, these attributes may be relevant to preservation. 18. Even time stamps that purportedly indicate which copy was written first may be an arbitrary result of file synchronization processes, network or device delays, or similar phenomena that have no semantic significance. 19. Moreover, since there is no digital equivalent to a traditional manuscript, there can be no unique prepublication version of a digital informational entity. REFERENCES Erlandsson, A. 1996. Electronic Records Management: A Literature Review. 
International Council on Archives' (ICA) Study. Available from http://www.archives.ca/ica.
Horsman, P. 1994. Taming the Elephant: An Orthodox Approach to the Principle of Provenance. In The Principle of Provenance, edited by Kerstin Abukhanfusa and Jan Sydbeck. Stockholm: Swedish National Archives.
Michelson, A., and J. Rothenberg. 1992. Scholarly Communication and Information Technology: Exploring the Impact of Changes in the Research Process on Archives. American Archivist 55(2):236-315.
Rothenberg, J. 1995. Ensuring the Longevity of Digital Documents. Scientific American 272(1):42-7 (international edition, pp. 24-9).
_____. 1999. Avoiding Technological Quicksand: Finding a Viable Technical Foundation for Digital Preservation: A Report to the Council on Library and Information Resources. Washington, D.C.: Council on Library and Information Resources. Available from https://www.clir.org/pubs/reports/rothenberg/pub77.pdf.
Rothenberg, J., and T. Bikson. 1999. Carrying Authentic, Understandable and Usable Digital Records Through Time. RAND-Europe. Available from http://www.archief.nl/digiduur/final-report.4.pdf.
Swade, D. 1998. Preserving Software in an Object-Centred Culture. In History and Electronic Artefacts, edited by Edward Higgs. Oxford: Clarendon Press.
https://www.clir.org/pubs/reports/pub92/rothenberg/
Does Your Culture Make You Feel Bad About Being An Introvert? You Are Not Alone Psychologists explain that introverts who feel pressured to act in extraverted ways may feel trapped in inauthenticity. By Mark Travers, Ph.D. | May 11, 2022 A new study published in the Journal of Research in Personality suggests that introverts feel less authentic when acting in extraverted ways, but feel compelled to do so in cultures that uphold an 'extraversion ideal.' I recently spoke to psychologists John Zelenski and Isabella Bossom of Carleton University in Canada to understand the distinguishing features between momentary and long-term introversion and extraversion. Here is a summary of our conversation. What is 'the extraversion ideal' and how does it affect individuals? The 'extraversion ideal' is the preference in many Western cultures for people who embody more extraverted traits. In Western societies, it is seen as preferable to be more agentic, expressive, sociable, and comfortable receiving attention from others. Susan Cain has described this phenomenon in a very popular TED talk and book, and in a way that particularly resonated with some introverts who experienced the cultural preference negatively. There is also extensive research that demonstrates that extraverted people are, on average, happier than introverted people. If you are an extraverted person living in the USA or Canada, the cultural context may provide a good person-environment fit. However, if you are an introverted person living here, the person-environment fit may be lower, which could result in lower well-being and authenticity and pressures to act in more extraverted ways. It is worth noting that extraversion predicts happiness in cultures with less preference for the trait, but those links are also weaker. So, culture is likely only part of the reason why extraverts are happier. Your paper states that introverts engage in extroverted activity more than extroverts engage in introverted activity. Please could you further explain this phenomenon? States refer to how one is feeling, behaving, thinking, etc. in the moment with the idea that only a short time frame is under consideration (e.g., moods are temporary). Traits are the long-term characteristics of people that are relatively stable (e.g., over years, on average). One way to understand the link between states and traits is to view traits merely as averages of states over time. This can be applied to both authenticity and introversion-extraversion. It is possible to debate whether the trait is more than the sum (or average) of its moments, but this perspective comes with a recognition (and data) that suggest that people display a wide range of behaviors over time. Introverted people act extraverted and vice versa; however, when we take averages of those behaviors over time (e.g., a week or two), we find remarkable stability in these average levels. This suggests clear personality differences (i.e., the consistent differences in averages), even while we can also observe much moment-to-moment variation. From this perspective, extraverts are people who behave in extraverted ways more often than introverts. (Also, personality psychologists see introversion-extraversion as a dimension of difference where being more of one necessarily means being less of the other; still, there are many people who fall near the middle of that dimension and therefore not extreme introverts or extraverts.) 
What inspired you to investigate the topic of trait introversion-extraversion and state authenticity and what did you find? We were fascinated by the finding that introverts feel more authentic when behaving in extraverted ways. We wanted to test this effect and determine whether embracing extraversion always made people feel authentic. We thought that perhaps people have varying levels of strength of identification as an introvert or extravert as part of their identity. We anticipated that identification strength as an introvert or extravert would be distinct from one's trait levels (i.e., something you feel is important to you vs. something you merely have). Using an adapted version of an online debate task that was developed in Dr. Zelenski's laboratory, we assessed how debating for or against the benefits of extraversion would influence state authenticity for those strongly (versus weakly) identified with their introverted nature or extraverted nature. Study participants were Canadian university students who completed the study online. First, the participants answered a questionnaire about their traits (including extraversion), then they answered a questionnaire about their strength of identification as an introvert or extravert (and a few other, less focal things). Next, participants completed an online debate task where they were randomly assigned to either a pro-extraversion or a con-extraversion condition. Their task was to debate the resolution 'it is good to be more extraverted than introverted.' After providing an argument to support their assigned position, participants were shown four rounds of opposing arguments that they had to rebut to support their position. Then, participants completed a questionnaire on their state authenticity and moods. In Study 1, we found that people who had strong introvert identities had lower state authenticity when they argued for the benefits of extraversion. We did not find any differences for trait extraversion. In Study 2, we found again that people with stronger introverted identities reported higher state authenticity when arguing against rather than for the benefits of extraversion. However, we also found that people with higher trait extraversion reported higher state authenticity when arguing for the benefits of extraversion. What are some of the benefits of an individual having high levels of self-behavior fit? In our study, we found that participants who argued in ways that were consistent with their trait and identity (e.g., a highly identified introvert arguing against the benefits of extraversion) experienced more authenticity immediately afterward. It is important to note, however, that people's intuitions about the importance of fitting behavior to traits might exaggerate the reality. Our study was motivated by a series of studies that find acting extraverted is associated with good moods and feeling authentic for most people, including many introverts. Although there are a couple of other exceptions, our study is unique in finding the 'fit' pattern, and mainly for authenticity rather than mood here. We anticipated that measuring identities would be the key feature in finding the fit pattern; however, it is also important to recognize that debating for extraversion is not necessarily the same as behaving in an extraverted way. Participants did the task while sitting at a computer and typing (seems introverted), but they were asserting an opinion to engage in the debate (seems more extraverted). 
Did something unexpected emerge from your research? Something beyond the hypothesis? We did not anticipate that trait extraversion would produce differences depending on the debate position; instead, we thought that the strength of one's identity would be the key to finding that fit matters. Across the two studies, the statistical details differed, but overall there was a pretty clear pattern suggesting that fit with both traits and identities seemed important to feeling authentic in the debate task. We are still keen to learn more about how introvert and extravert identities are potentially important and how they might differ from trait scores (indeed, traits and identities were not strongly correlated), but the results raise the question of whether it might be particular situations (the debate here) rather than the identity-trait distinction that is key to fit. More research is needed.
https://therapytips.org/interviews/does-your-culture-make-you-feel-bad-about-being-an-introvert-you-are-not-alone
Southwestern Ham and Corn Chowder I created this chowder to help get rid of Thanksgiving leftovers and it has become a staple dinner for my family. It is hearty enough to be eaten as a main dish or an appetizer. Original recipe yields 6 servings. Cook's Note: Southwestern corn contains red bell peppers, poblano peppers, and black beans. You may continue to simmer chowder on low heat for longer if you would like to develop flavors more.
https://www.allrecipes.com/recipe/245211/southwestern-ham-and-corn-chowder/
answer, more so, for the absurdity of the attempt itself, because, she says, a legend is "imaginary reality", not a representation of real historical facts. Indeed, a legend is always the result of relentless alteration, over several centuries, of a tale that may have originated purely from sheer imagination or, worse, from an intricate mixture of imagination and reality; therefore, any attempt to read it as if it were newspaper news cannot but lead inescapably into a blind and futile labyrinth of illations and illusions. There is, however, at least one indisputable concrete fact, and it is that the legend, in our case the Fanes' saga, does exist. There is no good reason to dismiss a priori as useless the attempt to evaluate, rationally, analytically, and cautiously, the whys and hows that led to this outcome: which circumstances, whether in the realm of reality or in that of imagination, may have triggered the process, and when; how the tale first took shape, and how this shape was distorted and altered over time, finally acquiring the shape we can see today. Our procedure must obviously start from the analysis of the structures and contents of the legend as it has been handed down to us. We must carefully avoid the risk of drifting away into a misty cloud of self-referential abstractions, only apparently ballasted by the use of very long and scientific-looking words. Therefore, we must quickly find a way to anchor our analysis to a robust coordinate system; this can only consist of the sequence of cultural backgrounds that the legend itself has crossed while being handed down from one century to the next, since it was first told. In the study of the legend, it is of primary importance to distinguish between the critical discussion of the meaning that can possibly be attributed to the described situations (whether or not they are traceable back to real events) and the reconstruction of the cultural background that surrounds them. This background consists of the many small "environmental" details often dispersed almost inadvertently by the first storytellers as an obvious part of their world. Later narrators usually repeat them "because they have always been there", at times no longer even understanding their original meaning. The original background of the story can be reconstructed by correlating these details with one another, and, on the other hand, by verifying the absence of those different details that would certainly have been present if the legend had originated at a different time and within a different context. The capital importance of this cultural context, which can be read "between the lines" of the legend, resides in the fact that, while even in the best case a wide margin of uncertainty remains at the end of the analysis about the claimed historicity of the narrated events, the background, on the contrary, sometimes emerges crisp, clear, and nearly unmistakable in the light of the data made available today by modern research (historical, archaeological, etc.). The emerging environment not only allows assigning the legend to a well-defined period, but sometimes attains the unexpected result of outlining, at least indirectly, the schematic contours of the supposed historical events that might possibly have triggered its origin. This is exactly what happens in the Fanes' case. 
It is very probable, however, that several legends handed down to us have no connection at all with events that really happened: myths, fables, or simply fiction conceived to glorify a hero or an ancestor, or a mix of all these elements. Good. Anyway, let us look at it from the other side. Even in a society that masters writing, and all the more in an illiterate one, whenever a remarkable, "historical" event takes place, it is committed to memory by its witnesses, who recount it to others. This may not yet be the starting point of a legend, but it already contains all the elements required to become one, if the socio-cultural situation is favourable. We can state, therefore, that at the root of at least a few of the legends handed down to us from a misty past, there might have been events that today we would define as historical. There are plenty of examples of legends long believed to be mere myths and later confirmed by indisputable archaeological evidence: from Troy to the Rome of the early kings. Obviously, the process through which the narration of a real event can become a legend is long and complex; it involves the heavy and repeated distortion of the first-hand reports, which in turn may not have been completely trustworthy and exhaustive. However, if we have some knowledge of the psychological and motivational processes that lead to the transformation of a historical fact into a legend and to its later modifications, as well as of the cultural background within which these processes took place, it is conceivable, in principle, to follow the same route backwards and understand whether or not the legend was assembled from a core of real occurrences, and what these may have been. It is clear that by this method we shall never be able to collect a solid system of documentary evidence possessing absolute historical value: at most we shall obtain a web of clues, strongly connected, however, with a framework already known from other sources, within which they can at least assume the value of a direction for further research. It may well be, anyway, that at the end of the process of dismantling the legend we remain empty-handed, that is, we must conclude that no real event lies at its root. Sometimes, maybe very often, this will be the result, and paradoxically it will contribute to the validation of the method we have followed. Obviously, we must carefully keep away from the capital mistake of fitting the collected elements into a preconceived mental picture, all the more so if this picture is the one we would like to see emerge. To avoid such a mistake, we have no choice but to use no background picture at all beyond what emerges from the objectively known data. These may be available with reference to geography, geology, climatology, archaeology, history, ethnology, linguistics, and whatever else may be pertinent, so that our research assumes an essentially multi-disciplinary character. Obviously, it is not necessary to be an expert in every single discipline (indeed, being specialized in one or more might even lead to a slightly distorted vision of the matter): what is needed is to be correctly informed about the results of them all. Each single step of the analytical procedure described above must therefore be taken with this framework of independent pieces of information in mind, beyond the mere internal coherence of the reconstruction. 
We also ought to keep in mind that, whenever several different scenarios, each fitting the available data, appear as possible outcomes, it is advisable not to discard any of them and to consider them equally possible, at most providing each with a careful evaluation of its relative probability. Special attention must also be paid to the presence of different variants of the legend, whether they can be ascribed to the versions of different eyewitnesses (and these are the most clarifying ones) or must be attributed to later modifications, because in that case they help clarify how the legend was perceived by the storytellers of a given age: and this aspect, too, can be significant for decoding it. I wish to make very clear that I have no intention of casting doubt on the methods and results of anthropological research when it explains the collective unconscious mechanisms that lead, over time, to the assignment of specific symbolic significance to themes, concepts, or characters of a myth or a legend. This research has the purpose of accounting for the imaginary components of the legend; beyond any doubt, these components are very often present, sometimes alone, sometimes mixed with the remembrance of real facts in an almost inextricable fashion. However, the overlapping of these fictional components does not at all exclude the possibility that a historical root triggered the storytelling mechanism; for this reason the two methods of research are certainly complementary and, far from negating each other, may on the contrary validate each other's results. I believe that the procedure described above, when used honestly and carefully, may lead to defensible and non-trivial interpretations, at least in some happy cases. I have to admit that I am no specialist: I am not even sure whether what I have proposed above is already well known, perhaps even old-fashioned, or whether it contains some new elements. I tried to apply these concepts to the analysis of the saga of the Fanes' kingdom (almost for fun, at first), and the results I eventually obtained were partly a surprise to me as well. Obviously, I do not believe them to be the end of the story, but just a step, which I hope is of some interest and significance, along the route of a process of understanding that is still far from complete. The most delicate part of the process through which a legend may be generated out of a historical fact is no doubt that of the first, or at most the second, generation after the events: the stage when eyewitnesses are still alive and, consciously or not, "decide" what to hide and what to tell, and how to tell it. It is well known that there are psychological effects by which, even in absolute good faith, but generally in line with what the audience expects, some episodes or details may be deleted from memory and others may even be invented, so that the witness himself (to a degree that varies from one individual to another) can often convince himself that he remembers the events differently from the way he would have reported them, or actually did report them, just after they happened. 
When there are several eyewitnesses, as often occurs, and not all of them witnessed exactly the same events, or they witnessed them from different physical or mental points of view, it may easily happen that a collective consensus arises around a version that consists of a weighted mean of many different reports; this version is finally reported as true even by those who, according to what they witnessed themselves, would have reported it quite differently. All of this happens every day in police stations and in courts of justice. Until now I have only discussed unconscious mental processes, i.e. those that happen in total good faith. But we must also take into account that at least some of the eyewitnesses certainly had reasons, good or bad, to wilfully conceal one part of the truth or to inflate another, while on the other hand we can be sure that episodes possibly witnessed by no one will be reconstructed out of guesswork, and no one will label them as such. This said, it can be taken for granted that, even a short time after the facts, the version reported as standard must be carefully filtered if we want to extract anything resembling what happened in reality. Nor can we forget that consensus is by no means the only psychological process at work: in every case someone is going to claim his own version as true, different from the "official" one, often (but not always) with some foundation. Thus we can expect that what is handed down to the next generations will be a "standard" version, obtained through a more or less generalized consensus, with a small number of variants diverging on details that may even be of some importance. Obviously, if this holds true for the sequence of the events, it holds even more for the motivations that led to those events, for the intentions and sentiments of the people who carried them out; intentions and sentiments that are an integral part of the story, conferring significance and depth on it, but that allow widely different interpretations, even more than concrete facts do, and may easily be misunderstood or wilfully distorted. This, substantially, is the way legends are born, but it is also the way History is born, because up to this point there is no basic difference in method, and things change very little even if someone takes care to write them down quickly. When we say that it is the winners who write History, we basically mean this: not only the interpretation of the facts, but the facts themselves, take on a completely different hue and meaning, and may appear to have been different, according to the relative position of the witnesses who are authorized to recount them, as well as according to the emotions and expectations of their listeners. A separate problem is how the legend, once constituted, is handed down. Some people claim that the oral transmission of historical facts cannot last longer than three to four generations after the events. Others speak of "three centuries". Both these limits are quite probably reasonable, case by case, but only if they refer to family memories in a society where the official recording of historical facts is entrusted to the written word, and the act of handing down reminiscences is not perceived as a social effort of any real importance for the collectivity. 
There are, on the contrary, several instances of events that, in the absence of written records, have been handed down orally over much longer time spans, even if at times heavily distorted and transformed into legends or even myths. There is no need to refer to societies very distant from our own: we can take as examples the tales about the war of Troy before Homer composed his Iliad, or the stories concerning the kings of Rome, today confirmed by archaeology in their essence, before they were frozen on paper by the historians of the late Republican age. We cannot forget, however, that the creation of myths – tales that represent in their essence a moment of clarification about the big questions of existence, at both an individual and a social level, a source of certainties, a conceptual reference point to which the whole cultural structure of a collectivity is anchored – is an inescapable need of primitive societies. Often these myths are simply built around the lives of great men who actually existed, modified according to need in such a way as to make them unrecognizable and no longer rationally plausible. These effects must be removed as far as possible, as must the distortions that may have been introduced wilfully, for political purposes for instance. With the above mentioned exceptions, in an illiterate society there is a good chance that legends tend to stabilize after the first few generations, because the emotional impact of the narrated events decreases, and there are no longer ideological or practical interests in modifying their telling again. On the contrary, adherence to the original model is constantly considered an important measure of the quality of the narration, and therefore of the narrator's ability as well. Problems arise as time goes by, when the cultural background itself, within which the events occurred, inevitably changes. The meaning of several original details may no longer be understandable at all. While the plot of the legend is usually preserved, what may happen then is that details that appear weird and puzzling are not suppressed (owing to the "principle of conservation"), but rather reduced to trifles, or put aside in a corner; or their presence is justified, one way or another, by means of logical twists or even the insertion of fictional passages having no correspondence with the original context. On the other hand, we can observe the easily explained tendency to naïvely insert new descriptive details that belong to the storyteller's world, like renaissance artists who painted biblical characters dressed as their contemporaries. These contaminations, which anyway have no impact on the story, are easy to identify as such and can be easily removed, although one cannot rule out the risk of mistaking a detail that by chance might work both in the later period and in the original background. Not difficult to recognize, but much more delicate to remove - maybe impossible - is the turning of characters into archetypes. Those who once were men and women in flesh and blood, with personalities of their own, with complex feelings and motivations, are over time gradually flattened into stereotypical models of behaviour, or on the other hand identified with their role, like puppets or characters in an improvised comedy. 
This process can advance so far that even their names may be completely forgotten (a process made easier when the language used to hand the legend down changes as well), or replaced by another character's name, whether historical or mythical or belonging to a different legend, but in any case defining the archetype into which the character is irrecoverably categorized. At the same time it may happen that events that occurred repeatedly over time, or happened stepwise, perhaps involving different persons playing the same role (e.g. several generations of kings), are condensed and synthesized as if they were a single event that happened to a single person, who combines the personality and deeds of everyone actually involved. More specifically, there is a good chance that the complex unfolding of a social or cultural evolutionary process, which storytellers may perceive as such but cannot effectively express through an abstract concept (something legends always avoid), is condensed into the narration of a single episode, perhaps derived from the circumstances of an event that really happened, which therefore takes on the features of a symbolic representation of the whole process. Even worse, it may happen that storytellers take characters initially separated even by a wide time span and melt them into a single one, because they can be seen as "re-embodiments" of the same archetype and act within grossly equivalent situations. Entire passages of totally different legends can thus be overlapped and mixed together. By this procedure real fictional chimeras can be created, whose parts can be separated again only on the basis of a hoped-for difference of backgrounds; the risk that in such an operation some pieces are shifted to the wrong side is always present. Last, it may well happen that elements or themes of a legend are considered discreditable according to the ethical, political or religious beliefs of a later epoch, and as a consequence are ironed out or masked or even drastically suppressed from the tale. Having considered all this, is there still any concrete hope today of unravelling the knot, winding the process back and removing the distortions that, consciously or not, have been applied? Of course there is no universally valid answer to this question. First, it is clear that we are defenceless against a legend that constitutes, totally or partially, an authentic "historical novel", i.e. a narration of fully fictional events, but perfectly and coherently framed within a cultural and situational background that really existed. This risk must always be accounted for, although, luckily for us, it seems that this type of fiction, which belongs to a very different intellectual environment, has very little chance of being assembled and handed down by oral transmission. This said, we can state that a complete and objective knowledge of the facts intrinsically represents a limit that can be approached, but never attained, not even if we were able to immediately collect the reports of each single eyewitness. Every subsequent distortion process introduces a further unknown element, which may be corrected only if we can recognize it as such and undo it, on the basis of our knowledge of the historical and cultural context that caused it. Such a reconstruction will come closer to reality as our analysis of the hows and whys of the original distortion improves. 
Even so, we may be able to recognize, but hardly to reconstruct, the details that have not just been altered but bluntly removed. We must also note that the result obtained by applying a given filter to a given scenario is univocally determined, while it is far from certain that the same result may not be obtained by applying the same filter, or a different one, to a different original scenario. As a consequence each step, even if we are able to recognize the presence of a distortion, increases the fuzziness of the reconstructed original environment, and also increases the probability that in correcting it we may have introduced a serious blunder. Once we have brought the procedure of reconstructing the original core of the legend to an end, against all odds, we must also understand who its witnesses may have been, how much and which part of the story each of them may really have been acquainted with, whether and how much he may have had an interest in, or may have taken pleasure in, distorting, concealing or inventing: and how strongly the process of consensus with the audience may have been at work, and in which direction. Only at this point will it be possible to hazard a statement about whether, and which, real occurrences may have played a role in the formation of the legend.
http://ilregnodeifanes.it/inglese/research2.htm
Material compatibility rating key:
- A very good material for the indicated coating process.
- An acceptable material for the indicated process; however, there may be some special process modifications required, such as temperature.
- Material can be coated with the indicated process; however, possible material stability or composition issues may result.
- Not recommended. This material is absolutely inappropriate for the indicated process: do not attempt.
Please Note the Following:
- This is only a sample listing of materials and should not be considered definitive. Information has been generalized – please contact a Richter Precision Inc. representative regarding your specific application.
- All parts, regardless of coating process, should be sent to us already heat treated to your required hardness. In the case of the CVD process, parts will be annealed during coating and then re-heat treated afterward. However, being hardened prior to coating will reduce stresses and distortion during coating.
- When considering the PVD process, whenever appropriate for the material, we recommend that final draws be > 800° F in order to ensure that no annealing and/or distortion will occur.
http://fusioncoatings.pk/material-coating-process-compatibility/
McKinney ISD's graduations will be held on Monday, May 23 at Credit Union of Texas Event Center. Times are as follows: McKinney North High School at 9 a.m.; McKinney Boyd High School at 1 p.m.; McKinney High School at 5:30 p.m. Live streaming will be available for each ceremony at www.mckinneyisd.net/graduation/. Special Dietary Needs The U.S. Department of Agriculture’s (USDA) National School Lunch Program (NSLP) and School Breakfast Program (SBP) require that schools make substitutions to the regular meal for students who are unable to eat the meal because of their disabilities. This need must be certified by a licensed physician. Substitutions are required for students that have: - A physical or mental impairment that substantially limits one or more major life activities, including eating - Food allergies that may result in severe, life-threatening (anaphylactic) reactions These conditions meet the definition of “disability,” and substitutions prescribed by the licensed physician must be made. Substitutions for Students without a Disability Menu substitutions or modifications are only made for medically documented needs. Food Allergies or Intolerances Children with food allergies or intolerances that are not defined as life-threatening (anaphylactic) reactions are not considered to have a disability. It is not required that food substitutions for non-life-threatening allergies or intolerances be accommodated. However, the MISD Child Nutrition Department is happy to work with families on a case-by-case basis to accommodate these needs. Medical documentation is required before any menu substitutions or modifications will be considered. Religious or Personal Preferences Menu substitutions or modifications will not be made for personal requests, including religious or personal preferences. Nutrition information is available at School Dish to help students plan their meals in a way that fits with their preferences. A note can be added to your student’s account as a reminder to the cafeteria staff. However, it is the student’s responsibility to choose foods that fit their dietary needs and preferences. If you would like a note added to your student’s account regarding specific eating preferences (i.e. vegetarian, no pork) please contact the school nutrition dietitian Melissa Silva, [email protected]. Make a Diet Modification Request Child Nutrition must receive a signed Food Allergy/Disability Substitution Request form or a signed statement from the student’s treating physician before any menu substitutions or modifications will be considered. A new form is not required each year. However, any changes to a student’s health needs must be updated and include a signed statement from the student’s treating physician. All Food Allergy/Disability Request Forms can be sent directly to dietitian Melissa Silva, [email protected]; 469-302-6377. Please click the link below to download the food allergy & disability request form:
https://www.mckinneyisd.net/school-nutrition/special-dietary-needs/
Phu Quoc is an island of culinary exploits, boasting numerous specialties that cause visitors to salivate. One of the most favored dishes is raw herring. As Vietnam’s largest island, covering an area of nearly 600 square km, Phu Quoc is bestowed with abundant and valuable resources and is a point of cultural convergence, as its residents hail from Vietnam, Kampuchea, and China. Most of the immigrants are from Quang Ngai province. This multiculturalism results in an original flavor among the Phu Quoc people, especially regarding their culinary culture. Dishes in Phu Quoc often carry central Vietnam’s flavors, the savory taste of Chinese cuisine, or the sweet overtones known to Khmer food. Of the many delicious dishes on the island, raw herring is particularly unique. The key ingredient is fresh herring. In comparison with other kinds of herring available in Vietnam, Phu Quoc herring has firmer, more delicious meat with a higher level of protein. Raw herring is prepared rather simply. The fish is cleaned and descaled. Then its belly is cut and the fillets on either side are removed. The meat is doused in lemon juice. The rice papers used to roll the raw fish are soaked in young coconut milk, which yields a sweet fragrance and keeps the papers soft enough to roll easily. The dipping sauce has the pungent flavor of Phu Quoc’s famous fish sauce, which is mixed with peeled peanuts, sour shrimp, ground garlic and pepper. Sliced tomatoes, cucumbers, pineapple, coconut meat, lettuce, and mint leaves are rolled together in the rice paper with one or two slices of raw herring. The roll is dipped into the sauce, and then eaten. All these ingredients create a delicious dish with a peculiar blended taste: the fresh herring, the aroma of the fish sauce, the buttery taste of peeled peanuts and Phu Quoc coconut. Visiting Phu Quoc, your taste buds will undoubtedly be bombarded with endless island delicacies. But you should not miss the chance to try raw herring, as this dish is unlike any other in Vietnam. It is an example of true fusion and, in that, a beautiful trait of Vietnam’s culinary culture. This article was written by Lanh Nguyen from Vietnam Heritage Travel. For the original article, please visit:
http://wellnessarticles4u.com/a-unique-flavor/
This is an updated Ubuntu 15.10-compatible Ambiance modification which includes Unity and GTK+3 widget theming. Again, it was originally inspired by a mockup by Lucas Romero Di Benedetto: https://plus.google.com/104438618743851678614/posts/WL5iugZPnXd However, the goal is to copy the new official theme for Unity 8 as closely as possible, with authenticity in mind. To install, extract the AmbianceTouch folder and place it in ~/.themes/ . Then use a tool like Unity Tweak Tool to set it. Note that while installed, this theme may override the original default Ambiance theme, but it shouldn't. If it does and you want the original back, just remove the added folder and reload your theme. Also note there is no Radiance variant in this version yet, but I will be working on it and will post it separately once it is ready. Credit for the original Ubuntu themes package goes of course to the Ubuntu Artwork Packagers team on Launchpad (https://launchpad.net/~ubuntu-art-pkg) and related contributors to the original unmodified theme. Some assets were also borrowed from the Ubuntu UI Toolkit (https://launchpad.net/ubuntu-ui-toolkit). In accordance with the COPYING file in the original package, this modified theme is also distributed under the GNU GPLv3 license. What is your icon theme? Sorry, I can't reproduce this issue on my own system. I'm using Unity on Ubuntu 15.10 and the current version of the theme only modifies the window titlebars and Unity shell theme; no GTK changes yet. How does your system behave with the original Ambiance theme? Please update the GTK+2 theme because it doesn't match GTK+3. Also change the name in index.theme to Ambiance-Touch (or anything else), and the folder name, because Ambiance is the default theme and already exists there. Both of these will eventually be addressed. If you want the original Ambiance theme back, simply uninstall this one until I get a new name set in the theme files; also, I will finish the GTK+3 modifications first before porting them over to GTK+2. Thanks for the feedback! Thank you for the quick response. Looking forward to the finalized version.
https://www.xfce-look.org/p/1013554/
I have a confession to make. A few days ago, I headed to Tony Hu's Lao Beijing in Chinatown with the sole purpose of ordering a plate of General Tsao's chicken. On the one hand, heading to Chinatown, and especially to one of Tony Hu's much lauded restaurant concepts, to order a dish most commonly enjoyed in shopping mall food courts across middle America, seems to be missing the point. The dish in its present incarnation was designed for, promulgated by, and remains popular largely among Western diners, primarily because of its exotic-but-not-too-challenging nature. It is a dish reminiscent of the "flavors of the orient," bringing along with it all the negative cultural stereotype baggage associated with that phrase. Yeah, I said it. On the other hand, who am I as a food writer to feel qualified to enter into this conversation with even a semblance of credibility? Sure, I do way more research than reasonably expected before dining at ethnic restaurants. But as this essay on authenticity pointed out, it's difficult even for the actual experts to explain the nuances of cuisines across cultures to Westerners. Add terroir, slight regional preparation differences, and general lost-in-translation issues, and it seems rather elitist and narrow-minded of food writers to scoff at such dishes, regardless of our well intentioned attempts at understanding the food we are eating. We walk a delicate line in this regard: on one side is seeking out dishes the way they are intended to be enjoyed, respecting the cuisine and its preparers by encouraging them not to pander too much to our Western tastes. On the other side is our more general purpose as food enjoyers at large: to seek out and celebrate dishes that simply taste good. Which brings us to Lao Beijing's General Tsao's Chicken ($10.95). The dish arrived at the table as I knew it would. Large, irregular chunks of breaded and fried chicken, barely charred green and red bell peppers, crunchy baby corn, and a few dried red chilies for good measure, all swimming in a viscous peppercorn-specked brick red sauce. Spooned over the accompanying steamed rice, the dish was comforting in its familiarity, but self-assured in its preparation. This was no ordinary carry-out General Tsao's Chicken. The sauce was sweet but not cloying, spicy but not fiery or mouth-numbing. The barely cooked vegetables added crunch and a slight bitterness that contrasted well with the gently flavored sauce. And the chicken was on point—moist and flavorful on the inside, with the breading still maintaining its crunch underneath all that sauce. This dish was hardly reinventing the wheel, but it was undoubtedly the best version of General Tsao's Chicken I've ever had. And at the end of the day, isn't that really what all this is about?
http://chicago.seriouseats.com/2012/05/tgi-fry-day-general-tsaos-chicken-at-lao-beijing.html
My favorite cultural dish is “La Bandera,” which literally translates to “The Flag” in English. This staple dish originated in the Dominican Republic and has been served on the island for over 181 years, since 1840, when the Dominicans declared their independence from Haiti. You may ask how La Bandera represents the red, white, and blue. The meat represents the blue in the flag, which stands for liberty. It is a colorful meal that matches the colors of the country’s flag and consists of rice, red beans, meat, and salad. The dish itself is very popular because it has a unique mixture of African, Spanish, and Taino Indian herbs and sauces. Because I grew up as a child with mixed ethnicities, my parents took it upon themselves to introduce both of my cultures to me. They both accomplished this with food and flavors that satisfied my palate. My father was half Dominican, so he would actually teach me how to make some of his favorite childhood dishes. I remember my father always waking me up early to start seasoning the meat and getting all the ingredients put together for lunch. He would tell me stories about his ancestors and how he was passing down the secret ingredients to me to teach my children one day. The cultural dish became my favorite because it was a way I could feel connected to my roots without needing to have been in that country itself. It is also my favorite because it is nourishing and delicious to eat at any time of the day. I would say it is traditional for other Dominican families; however, for me it is rarely made although it is my favorite. I have tried to make it but it doesn’t taste the same as it did when I was a little child. All the love and hard work put into it is not the same without my dad. However, I hope to make it again one day, and maybe even with my own child, because then it might taste better.
https://thehoovercardinal.org/6884/entertainment/family-traditions-handed-down/
Kashmiri food is a culinary dance that plays on all the senses. The flavors and spices incorporate the rich diversity of the people of Kashmir. Kashmiris are hearty meat eaters, but the history and the cuisine are immersed in the cross-cultural influences of earthy foods, farm and hillside vegetables, and spiritualism through food. This dish is simple. Turnips and beans may seem like unusual companions but they result in an amazing offering. My dad sent me an old cookbook a few years back and I ventured into experimenting with this dish. Do try it. It is food for the soul and senses. I use a pressure cooker to make this dish but you can use a heavy-bottomed pan. The beans need to be cooked with the turmeric until firm but not too soft. Fry the turnips in a shallow pan with oil until browned and drain on a paper towel. Use the same pan and add garlic, cumin, cinnamon, red chillies, fennel and ginger powder. Fry in the oil, add a splash of water to soften, and then put this mixture in the beans (again, I am using a pressure cooker). Add the turnips, whole chilies and the asafetida (this is optional). Let it pressure cook for about 5 minutes until the turnips have softened. Same idea if you are using a heavy pan to let this cook. Serve with white or brown rice and a side of good yoghurt. This sounds delicious, what a great combination of spices. And I love your photos! Thanks for sharing with FF#65.
https://foodforthesoul00.com/2015/04/25/red-beans-with-turnips-kashmiri-cuisine/
The cuisine of India is one of the world’s most diverse cuisines, characterized by its sophisticated and subtle use of the various spices, vegetables, grains, and fruits grown across India. The cuisine of every region includes a wide assortment of dishes and cooking techniques reflecting the numerous demographics of the ethnically diverse Indian subcontinent. India’s religious beliefs and culture have played an influential role in the evolution of its cuisine. Vegetarianism is widely practiced in many Hindu, Buddhist, and Jain communities. India’s unique blend of cuisines evolved through large-scale cultural interactions with neighboring Persia, ancient Greece, the Mongols, and West Asia. New World foods like chili peppers, tomatoes, potatoes, and squash, introduced by Arab and Portuguese traders during the sixteenth century, and European cooking styles introduced during the colonial period added to the range of Indian cuisine. Best Indian Recipes: you may have traveled all across the planet trying all kinds of cuisines, but when you crave comfort food, that is when you realize there is nothing quite like Indian food. The aromatic curries, masala-packed fries, biryani and parathas all work miraculously to lure you into their spell. So prepare yourself to dive into a world of spice-packed, flavor- and fragrance-rich Indian food. From paneer makhni to Kerala-style prawns, from mutton rogan josh to Parsi eggs, every dish is an exceptional mixture of spunky ingredients and different cooking techniques. India’s regional and cultural diversity reflects beautifully in its food and is possibly the main reason why Indian food outranks that of other countries. Each Indian state has its own unique palette of flavors and ingredients. Even the spices they use are their own concoctions, made from scratch: dhansak masala, panch phoron, garam masala, chicken tikka masala, and lots more. Indian food features a few distinct characteristics that make it ‘truly desi’: its generous use of spices like ajwain, dalchini, cloves, black cardamom, star anise, dhania, and tamarind; its affinity for marrying flavors; and most significantly its array of addictive street food - crisp pani puris, mind-blowing papri chaats, and steaming hot aloo tikkis. Soup: a liquid food, especially one with a meat, fish, or vegetable stock as a base, often containing pieces of solid food. Hot soups are additionally characterized by boiling solid ingredients in liquids in a pot until the flavors are extracted. Salad: a dish consisting of a mixture of small pieces of food, usually vegetables or fruit suitable for eating raw; however, different varieties of salad may contain virtually any type of ready-to-eat food. Appetizer: a small dish of an overall meal. It can also be a drink or multiple drinks containing alcohol. Common examples: shrimp cocktail, calamari, salad, potato skins, chicken 65, chicken majestic, chicken lollipop, pepper chicken, crispy corn, paneer 65. An appetizer may also be very elegant in some restaurants. Main course: the main dish is usually the heaviest, heartiest, and most complex or substantial dish in a meal. The main ingredient is usually meat, fish, rice or another protein source. It is most often preceded by an appetizer, soup and salad. 
Common Indian main courses include hyderabadi biryani, peas pulao, palak paneer bhurji, jeera rice, aloo paratha, etc. Dessert: a course that concludes a meal. The course usually consists of sweet foods, such as confections and ice creams, and possibly a beverage such as dessert wine or liqueur; however, in the United States it may include coffee, cheeses, nuts, or other savory items, as well as cupcakes and cool cakes. Beverage: a drink is a liquid intended for human consumption. In addition to their basic function of satisfying thirst, drinks play important roles in human culture. Common types of drinks include plain drinking water, soft drinks, mocktails, cocktails, milk, coffee, tea, hot chocolate and juice. Andhra cuisine: a cuisine of South India native to the Telugu people from the states of Andhra Pradesh and Telangana. Generally known for its tangy, hot and spicy taste, the cooking is very diverse due to the vast spread of the people and varied topological regions. Andhra Pradesh, the spice region, is the rice bowl of Indian specialties.
Tandoori chicken ingredients:
1. 4 skinned chicken quarters
2. 2 tbsp lemon juice (Nimbu Ka Raas)
3. 1 clove garlic (Lasun)
4. 1-inch piece peeled and coarsely chopped fresh ginger (Adrak)
5. 1 green chili (Hari Mirch)
6. 1 tbsp water
7. 4 tbsp natural yogurt (Dahi)
8. 1 tsp ground cumin (Jeera)
9. 1 tsp garam masala
10. 1 tsp paprika
11. 1 tsp salt (Namak)
12. A few drops of yellow coloring
13. 2 tbsp melted ghee
14. For garnishing: lemon wedges and onion (Pyaj) rings
Method: Make 3-4 cuts in each chicken quarter using a knife. Put the chicken in an ovenproof dish. Rub the lemon juice into the incisions. Cover it. Let it marinate for about half an hour. Combine the garlic, ginger, green chili and water in a blender. Grind to form a smooth, paste-like mixture. Add the paste to the yogurt, ground cumin, garam masala, paprika, salt, coloring, and the melted ghee. Mix all the ingredients well. Spread them over the marinated chicken pieces. Coat the pieces with the yogurt marinade. Cover it. Let it marinate at room temperature for about 5 hours. Turn once or twice at most. Place the chicken in an oven at 325°F. Let it roast for 1 hour. Baste frequently and turn once. The chicken should be tender and most of the marinade should have evaporated. Then grill the chicken over hot charcoal. Garnish it with lemon wedges and onion rings.
https://www.elishafood.in/
Summer finally arrived about a week ago and it’s been too hot to cook. Too hot for heavy meals and just generally too hot. My favorite thing about this dish, adapted from this Shape.com recipe, is that it’s served chilled, and for a light dinner it is really filling.
Ingredients:
- Well-marbled New York steak or a skirt steak, grilled (we broiled and pan-seared a couple of times when it was too hot to grill) to your liking, but preferably no more than medium rare
- 1 small red onion, sliced into skinny wedges
- 1 bunch cilantro, coarsely chopped, no stems
- Large handful cherry tomatoes, sliced in half (or 1/2 vine-ripened tomatoes cut into wedges)
- 1 green pepper, sliced thin
- 2 limes (or about 1/4 to 1/3 cup lime juice)
- 1 tablespoon brown sugar (you may need more. Note: the original recipe called for palm sugar, but we’ve had a hard time finding it)
- 1.5 tablespoons fish sauce
- Thai chili powder to taste (we found a blend at Penzy’s Spices that we love for this dish – Bangkok blend)
Directions: Grill, pan-sear or broil the meat as directed above and slice it into thin strips after allowing it to rest at room temperature for 15-20 minutes. Put the brown sugar in a medium-sized mixing bowl, add some of the lime juice and mush it into a thick liquid. Add the rest of the lime juice and the fish sauce. When the steak has cooled, add it to the sauce mixture. Toss with your hands to coat the steak all over. Add the rest of the veggies, toss and taste. Tasting as you go is the most important part. Add more fish sauce if it needs more salt, or more sugar if it is too lime-y. Chill in the refrigerator for at least an hour or so to let the flavors meld. Then serve on a bed of lettuce and garnish with cilantro. My opinion: You can add the cilantro into the sauce mixture, just give it a rough chop. If you want to quickly sauté your veggies in peanut oil (or your favorite cooking oil), feel free; it adds another layer of flavor. This is the perfect dish for a hot summer evening or a refreshing change from the indulgences of holiday food.
https://aurorameyer.com/tag/limes/
How can virtual visualisation support decision-making in the restoration of historical interiors? In 2018, conservator in training of historic interiors Santje Pander won the '4D Research Lab' launch award for her project on the UNESCO Press Room, by the renowned Dutch architect and furniture maker Gerrit Rietveld. The room was designed for the UNESCO headquarters in Paris in 1958, but had become redundant and old-fashioned by the 1980s, after which it was dismantled and shipped back to the Netherlands for safekeeping by the Cultural Heritage Agency of the Netherlands (RCE). In recent years, the room has been brought back to attention and re-evaluated, which led to ideas about its possible reconstruction (recently a space has been found for the interior by the RCE). For her MA thesis, Santje studied the possibilities of reconstructing specifically the linoleum surfaces of the room, which were designed as a unique pattern of shapes and colour that covered both floor and furniture. She proposes various alternatives for the reconstruction of the floor. The main choice is between reconstructing the linoleum floor using linoleum from the current FORBO (the original manufacturer) collection, or using a newly produced reconstruction of the old linoleum. For the latter option, two alternatives were proposed: reconstruct the linoleum to match the aged and faded colours of the furniture, or reconstruct the linoleum 'as new', based on samples found in the FORBO archives. An important consideration is whether the reconstruction respects the original intentions of Rietveld, who designed the floor and furniture (and in fact the entire interior) as a unity. The concept of unity was especially important since the architecture of the room itself impeded a sense of unity due to its irregular shape and the awkward positioning of structural columns. The digital 3D reconstruction of room and furniture: although Santje's main focus was on the elements covered with linoleum, it was clear from the start that in order to gauge the effect of certain choices on the perception of the room, the entire space had to be digitally reconstructed. This included features such as walls covered in different vinyls, painted wooden cabinets of various types, mirrors, windows, furniture with vinyl upholstery, concrete architectural elements, and of course the TL-lighting. A unique object was the so-called 'world-map table', a table with a light-box type tabletop, which featured a map of the world. Fortunately, the original design drawings were preserved, as well as many (but not all) of the original objects. During modelling, the designs were compared with the photographic evidence and the preserved pieces in the depot, which revealed only small divergences between design and execution. Hence, certain details aside, the reconstruction of shape and dimensions has generally a high degree of certainty. As an added benefit of the modelling process, we gained some insights regarding certain design decisions by Rietveld, which we discuss in more detail in the project report. Reconstructing colour and gloss: for the reconstruction of the colours, we used colour measurements that Santje performed on the original linoleum samples and cleaned surfaces of the original furniture. 
The colour measurements were originally done with an X-rite Minolta i7 spectrophotometer, but we noticed that these diverged from the colours as measured on photographed samples, even though the light conditions of the spectrophotometer were matched by the studio lights. So we used both, to see if there was a noticeable effect on the reconstruction. In restoration science, much attention is paid to the accurate recovery of material properties such as the colour and gloss of a surface. Subtle differences may detract from the experience of the authenticity of an object. However, accurate digital reproduction of these properties is not an easy task. The scientific approach would be to objectively measure colour and gloss, and then to enter these values into the 3D modelling program. This is not as simple as it seems. Colour is nothing more than certain wavelengths of light being interpreted by our brain, which 'colour-codes' it for us on the fly. This helps us to distinguish different kinds of objects. Colour perception varies across our species, so it is very hard to objectively define colour. Also, colour is dependent on light: the same object has a different colour or tint under different environmental lighting conditions. So when we 'measure' colour, we basically measure a surface under specific conditions. Usually, this is 'daylight', which is a soft whitish light that we arbitrarily define as 'neutral'. However, in 3D modelling programs you create another virtual environment with lamps with specific properties, which means that the surface with the measured colour value is lit again, but under different conditions (in the case of the Pressroom: TL-lighting), creating yet another colour. And it becomes even more complex, since we also have to deal with the fact that there exists no single system to store and represent colour ('colour spaces'), and the digital model we use on devices (RGB) is a strong simplification of our own perception. Long story short, matching the colour and appearance of an object in a 3D program with simulated lights is ultimately a subjective process of trial and error. Gloss, on the other hand, is basically the result of the microscopic roughness or bumpiness of a surface. The rougher a surface is, the more light gets dispersed and the more matt the surface appears. The smoother it is, the more it reflects light back to the observer. The smoothest surfaces are mirrors. There are devices that measure gloss, and one was used by Santje in her material study. However, the resulting values cannot simply be entered into the 3D program we used (Blender), since it uses an entirely different model for computing gloss. So our method was to closely observe the original linoleum samples and linoleum floors in the real world, and try to match this in the 3D modelling program. Rendering: we created multiple renders with different material settings from the same perspective in order to compare the effects on the perception of the room. On purpose we chose a viewpoint that matched one of the historical photographs, so it was possible to compare this directly to the digital reconstruction. As the 1958 colour photos have known issues regarding the representation of colour, the marked difference was an interesting result that calls for reflection on how accurate our reconstruction is and how faded colour photos can cause a wrong impression of the original room. 
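To make the colour workflow described above concrete, here is a minimal sketch (not part of the original project) of how a measured CIELAB value from a spectrophotometer could be converted to linear sRGB before being entered as a base colour in a 3D package such as Blender. It uses the standard D65 reference white and the standard XYZ-to-sRGB matrix; the sample Lab value is purely hypothetical.

# Hypothetical example: convert a measured CIELAB value (D65) to linear sRGB.
# The Lab value below is made up for illustration; it is not a real measurement
# from the Pressroom project.

def lab_to_xyz(L, a, b):
    # CIELAB -> CIE XYZ, D65 reference white (Xn, Yn, Zn on a 0..100 scale).
    Xn, Yn, Zn = 95.047, 100.0, 108.883
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0

    def f_inv(t):
        return t ** 3 if t ** 3 > 0.008856 else (t - 16.0 / 116.0) / 7.787

    return Xn * f_inv(fx), Yn * f_inv(fy), Zn * f_inv(fz)

def xyz_to_linear_srgb(X, Y, Z):
    # CIE XYZ (0..100) -> linear sRGB (0..1), standard sRGB/D65 matrix.
    X, Y, Z = X / 100.0, Y / 100.0, Z / 100.0
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    clamp = lambda v: min(max(v, 0.0), 1.0)  # crude gamut clipping
    return clamp(r), clamp(g), clamp(b)

# Example: a hypothetical linoleum measurement.
lab_sample = (52.0, 38.0, 21.0)
rgb_linear = xyz_to_linear_srgb(*lab_to_xyz(*lab_sample))
print("Linear sRGB base colour:", [round(c, 4) for c in rgb_linear])

Note that this sketch stops at linear RGB values; whether the gamma-encoded sRGB transfer curve should also be applied depends on the colour management of the target software, which is exactly the kind of ambiguity the article describes.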
The perceptual difference between the room in which modern alternatives of the colours are applied and the rooms in which the original colours are applied is especially striking. The difference between the images which show variations of the original colours ('as new' and 'aged') is less perceivable. Although the actual RGB values are notably different when viewed next to each other in isolation, if applied in the room itself, differences are only noted after very close examination. It may be that the multitude of visual stimuli in the entire picture makes it very hard for our brains to perceive small differences. Reliability: the question remains whether these results are reliable enough to be used in the restoration decision-making process. There are multiple factors of uncertainty, the method of digital colour and gloss reproduction being an important one. Another factor is that we do not exactly know the original light conditions inside the room. We know that TL-lamps were used, but not exactly their power and light temperature. Based on these uncertainties, it is questionable whether we have accurately recreated the interior. The model should therefore be considered as such, a working hypothesis about the physical appearance of a lost space. But we must not forget that an authentic recreation has in this case never been the aim. Moreover, it is quite unlikely that modifying the uncertain variables within reasonable bounds would have changed the outcome of the study significantly. Nevertheless, to model colour and lighting more accurately based on real-world measurements, the digital methods we use must also improve. A virtual visit: the project got a nice spinoff in the form of an online 3D tour through the room, made in collaboration with the RCE. For this application we expanded the model to complete the room, and it was integrated with stories about the room from a design perspective. Of course, for this application we can only show one of the versions that we recreated. As a side note with respect to the above, the modifications and conversions necessary to render the model in the browser create yet another, slightly different version of the room. This underlines the importance for us, researchers in the humanities, of understanding and being transparent about the technical procedures and cognitive processes that lead to the creation of such digital 3D representations.
https://4dresearchlab.nl/tag/rietveld/
The legacy of Istanbul, once Constantinople, the capital of four empires, is greater than meets the eye. It's the Bosphorus, the Black Sea, the Aegean, and the Mediterranean; Mesopotamia, the Balkans, Anatolia, the Caucasus, North Africa, and Persia. It's the palace and the countryside, the business capital and the cultural capital. It's a little Turkish, a little Greek, Armenian, Jewish, Arabic and Persian. It's Istanbul Modern. We tell the story. Menu detail Simit and Yogurt Butter Appetizer Simit is the most consumed street food in Istanbul. It resembles a pretzel but it's baked with molasses and sesame seeds. On the streets of Istanbul it's served with 'smiling cow' brand cheese, the little triangles. We break bread with a yogurt butter. Mezedes served with Armenian Lavash and Lebanese Pita Small Plate We serve our mezedes family style, as small plates shared across the table. Here are the mezedes we will be serving at this event: Flash-cured albacore 'lakerda' Mother-in-Law Armenian dolmas Black Hummus bi Black Tahini with Crispy Lamb Tongue Two seasonal mezes driven by the farmers market Yuvalama with a Yogurt and Mushroom Broth Small Plate Bulgur meatballs; chickpeas; mushrooms; stewed beef; and yogurt, sumac and mint broth; Aleppo, Urfa and Maras peppers This is a dish that captures the essence of the flavors of Mesopotamia. It's served warm, not hot. Wild Pistachio Braised Short Rib with Orange and Pistachio Pilaf Entree Short rib; melengic and dar-ul fulful rice; candied orange; baklava pistachios; herbs; orange blossom and bay leaves This dish showcases the complex flavor of melengic, the Mesopotamian wild pistachio. The sauce should be reminiscent of coffee and chocolate flavors with a pistachio theme. The pilaf is the bright note, served family style.
https://www.tastemade.com/laura---sayat/experiences/7950-istanbul-modern-rc-alumni-2
Multimedia data becomes more and more relevant for applications that require a certain level of trust in the integrity and the authenticity of documents. Examples include scanned contracts and documents whose integrity needs to be verified, photos or video clips attached to news reports whose contents should be provably authentic, or recordings of interviews which shall be used as evidence in the future. The possibility to store documents in digital form raises new challenges with respect to the recognition and prevention of forgeries and manipulations. By using a powerful personal computer and sophisticated image editing software, even an inexperienced user is able to edit a picture at will, e.g. by adding, deleting or replacing specific objects, thereby creating “perfect” manipulations that do not introduce visually noticeable traces (Cox et al., 2001; Zhu et al., 2004). It is very hard, if not impossible, for a human to judge whether a multimedia document is authentic only by visual inspection. As a result, the old proverb “words are but wind, but seeing is believing” is not true anymore in the digital era. Multimedia document authentication tries to alleviate this problem by providing tools that verify the integrity and authenticity of multimedia files. In particular, those tools detect whether a document has undergone any tampering since it was created (Zhu et al., 2004). In this chapter we focus on tools that operate on raw data (such as sequences of image pixels or audio samples) instead of compound multimedia objects, as this is the focus of current research. Depending on the application scenario, three different approaches – media forensics, perceptual hashing and digital watermarking – can be found in the literature. The field of media forensics tries to examine a multimedia document in order to decide whether it is authentic or not. No prior knowledge of the document is assumed. Technically, these schemes look for suspicious patterns that indicate specific tampering. In addition, it is sometimes possible to determine the device that was used to create the document (such as a scanner or camera). Note that document forensics differs fundamentally from steganalysis. The latter tries to detect and decode any secret imperceptible messages encoded within a document, while forensics deals with the examination of document authenticity and integrity; steganalysis is thus out of the scope of this chapter. While promising approaches exist to uncover tampering, more reliable results can be achieved if a potentially tampered document can be compared to its “original” version. This operation is usually harder than it seems, as media documents may undergo several processing steps during their lifetime; while these operations do not modify the visual content of a document, its binary representation does change. For example, media files are usually stored and distributed in compressed form. Such compression methods are often lossy and will render the decompressed data slightly different from the original copy (e.g. the JPEG format does not store perceptually insignificant parts of an image). Besides compression, the data may also undergo other incidental distortions such as scaling. Thus, the binary representations of media documents cannot directly be compared to each other. Perceptual hashes provide an automated way of deciding whether two media files are still “perceptually identical”, for example whether one document is a copy of another one, which was processed without changing its semantics. 
A hash is a short digest of a message, which is sensitive to modifications: if a document is severely changed, the hash value will change in a random manner. Hashes can be used to verify the integrity of an object if the hash of the “original” is stored at a trustworthy place, such as a notary. During verification, the document is hashed and the hash is compared to the hash of the original. If the hash differs, the document is assumed to be modified.
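As an illustration of the difference between cryptographic and perceptual hashing described above, here is a minimal sketch (not taken from the chapter) of a simple 'average hash' for images, using the Pillow library, with a cryptographic hash alongside for comparison. The hash size, decision threshold and file names are arbitrary assumptions; production perceptual hashes are considerably more sophisticated.

# Minimal sketch of perceptual vs. cryptographic hashing for images.
# Assumes the Pillow library; file names are placeholders.
import hashlib
from PIL import Image

def average_hash(path, hash_size=8):
    # Downscale to a tiny greyscale thumbnail and threshold on the mean:
    # small processing changes (recompression, mild scaling) barely move the bits.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def sha256_of_file(path):
    with open(path, "rb") as fh:
        return hashlib.sha256(fh.read()).hexdigest()

original, candidate = "original.jpg", "recompressed.jpg"  # placeholder paths
d = hamming_distance(average_hash(original), average_hash(candidate))
print("Perceptual (Hamming) distance:", d,
      "-> 'perceptually identical'" if d <= 5 else "-> modified")
print("Cryptographic hashes match:",
      sha256_of_file(original) == sha256_of_file(candidate))

Even after lossy re-encoding, the perceptual distance typically stays small while the cryptographic hashes differ completely, which is precisely why perceptual hashes are used when documents may undergo incidental processing.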
https://www.igi-global.com/chapter/challenges-solutions-multimedia-document-authentication/71047
Caramelized Pork Belly (Thit Kho) This dish is very popular in Vietnamese households for everyday eating but is also traditionally served during Tet, the Vietnamese Lunar New Year. The longer you cook the pork belly, the more tender it becomes. If you make this dish ahead, the fat will congeal on the surface, making it easier to remove, and a little healthier! This also allows the flavors to meld a little more. Serve with rice. The original recipe yields 6 servings. Cook's Note: Check occasionally while the pork is simmering that the liquid doesn't evaporate too much. Add water a little at a time if the sauce seems to be drying out.
https://www.allrecipes.com/recipe/245887/caramelized-pork-belly-thit-kho/
Elliott Sound Products | Valves (Vacuum Tubes) - Harmonic and Intermodulation Distortion
There is a long running and generally false belief that second harmonic distortion is 'nice', that even order distortion is preferable to odd-order distortion, and that valves (in particular) produce second harmonic distortion. This apparently (and supposedly) is the dominant reason that valve guitar amps sound 'better' than transistor amps. Firstly, second harmonic distortion never exists in isolation. It is impossible to obtain only second harmonic distortion - there will always be traces of third, fourth, fifth, etc. in the final waveform. Single-ended valve and transistor stages (both power amps and preamps) do generate predominantly second harmonic distortion, but it is not isolated. The other frequencies will always be present, although they may be at a relatively low level. Secondly, any harmonic distortion also results in intermodulation distortion (IMD), and that is the main topic of this article. There is nothing nice about IMD, unless it is part of the player's sound in the case of musical instrument amplifiers (guitar, bass, keyboards, etc.). In any reproduction system such as a home hi-fi, IMD adds components to the sound that were not in the recording. While a small amount of IMD will often be difficult to hear, it has always been desirable to reduce it to the absolute minimum. Negative feedback was not invented simply to reduce simple harmonic distortion, although it did that as a matter of course. Harold Black invented the concept in 1927, and it was intended to solve an increasingly troublesome problem - intermodulation distortion. He worked in the telecommunications sector at Western Electric (and eventually at Bell Labs), and IMD was a major problem with early long distance multi-channel carrier transmission systems. The goal was to minimise the intermodulation products that created havoc when two or more separate signals existed on a single telecommunications transmission line. Note: For a far more in-depth look at the phenomenon described here, please refer to Intermodulation - Something New To Ponder. The article shows both measurements and simulations, and includes sound files that can be used to verify that my findings are real and easily reproduced. When a single-ended stage starts to approach clipping (guitar amp preamps, single-ended output stages, etc.), the distortion is almost always asymmetrical. One polarity of the waveform is distorted while the other remains (relatively) clean. Somehow, it is believed that this is nicer than symmetrical distortion, which by its very nature produces almost exclusively odd-order harmonics. Because of the misconceptions that abound, I decided to run some tests to see if there were a way to demonstrate the difference between symmetrical and asymmetrical clipping. As it turns out, asymmetrical clipping is actually worse than I thought, as described below. As a guitar effect there will undoubtedly be players who will find it useful, but I seriously doubt that anyone would like to have no alternative. Given the number of possibilities for distortion, a simple clipping circuit was used, because this provides higher levels of distortion (making it easier to hear and measure), and also means that the circuit is easily duplicated by anyone else who wishes to do the tests for themselves. 
Regardless of the type of (harmonic) distortion, intermodulation distortion (IMD - generally agreed by everyone to be the very worst kind of distortion) is always one of the results. IMD creates additional frequencies that are said to be the sum and difference of the original frequencies in the input waveform. When a complex musical passage is the source, the IMD products can be quite extraordinary. The result is serious aural confusion of the signal, where what used to be an orchestra with different instruments becomes a 'wall of sound'. While the tests described here are deliberately exaggerated, the principles remain the same even at much lower distortion levels. There is no form of non-linearity that will fail to produce intermodulation distortion, so the goal for hi-fi is always to minimise intermodulation distortion. Since low intermodulation demands high linearity, simple harmonic distortion is also reduced. The holy grail of analogue design has always been the mythical 'straight wire with gain' - an ideal amplifier. The ideal amplifier is one that has infinite bandwidth and input impedance, an output impedance of zero ohms, and can provide infinite current. Naturally, it has no distortion whatsoever. While readily available as mathematical models in simulators, the real world and the laws of physics prevent us from obtaining one. When confined to a set of parameters that are suitable for audio reproduction, many modern amps come so close to the ideal that it is difficult to measure any major deviation from the ideal case. Certainly, distortion figures are commonly so good that any distortion (of any type) produced by the amplifier will be well below the threshold of audibility. In some cases even traditional measurement limits are bettered and distortion can only be measured using special techniques. This is how it should be. When we look at valve (tube) amplifiers, the situation is not so good. Many fine valve amplifiers have been built, and some were almost as good as today's well designed transistor amps. There are also a great many new designs that fail to meet the most basic standards of high fidelity. Especially with guitar amps, distortion is not just a fact of life, but a requirement for a great many players. In the case of hi-fi, it's generally not something that should ever be heard, but for very low powered systems (less than 10W) it is inevitable that programme material with a wide dynamic range will distort during loud passages if anything more than very modest SPL is required. In the case of single-ended valve stages, they will generate increasing levels of predominantly second harmonic distortion as the level is increased. With sufficient level, such amplifiers will almost invariably clip asymmetrically, producing allegedly 'nice' even-order distortion. Push-pull stages will clip symmetrically, and this gives 'bad' and 'horrible' odd-order distortion ... or so we are told. It is a fact of life that a properly set up push pull stage will cancel most even-order distortion, and any stray second harmonic distortion is almost totally cancelled. Presumably, this is the reason that so many people seem to like single-ended (especially triode) amplifiers. Note that with a push-pull output stage, only even-order distortion generated in the output stage is cancelled - any distortion produced by prior stages becomes part of the signal and cannot be removed or cancelled. 
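As a quick illustration of the sum-and-difference behaviour just described, the short sketch below (a standalone example, not from the original article) enumerates the intermodulation products of two tones up to a chosen order. With 1kHz and 1.1kHz it reproduces the familiar products such as 100Hz (f2 - f1) and 2.1kHz (f1 + f2), alongside the plain harmonics.

# Enumerate harmonic and intermodulation products |m*f1 + n*f2| up to a given order.
f1, f2 = 1000.0, 1100.0   # the two test tones used later in the article
max_order = 3

products = set()
for m in range(-max_order, max_order + 1):
    for n in range(-max_order, max_order + 1):
        order = abs(m) + abs(n)
        if order == 0 or order > max_order:
            continue
        f = abs(m * f1 + n * f2)
        if f > 0:
            products.add(round(f))

print(sorted(products))
# Expected output includes 100 (difference), 2100 (sum), 900, 1200, 3100 and 3200,
# as well as the fundamentals 1000 and 1100 and the plain harmonics 2000, 2200, 3000, 3300.

Raising max_order quickly multiplies the number of products, which is the numerical counterpart of the 'wall of sound' effect described above.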
Interestingly, apart from a few very small low-budget and/or practice amps, all guitar amps over about 5W are push-pull. Look at Fender, Marshall, Ampeg, Boogie, Vox (UK made models), Australian amps like Lenard, Vase, Strauss ... the list is endless, and they all have push-pull output stages. So much for the claims of second harmonics - remember that a push-pull stage cancels the second harmonic, and the output distortion consists of predominantly odd-order harmonics. Indeed, looking at the circuits for most of the popular guitar amps shows that the vast majority use drive stages that remain symmetrical until the power stage is in gross overload. There are a few amps that do not clip symmetrically, and these are not amongst the popular brands because they sound bad when driven hard. One of the most bizarre comments I think I have ever read is "Cross-over distortion is a non-musical type of distortion, and isn't as pleasing to hear as 'harmonic distortion'". I saw that remark in 'The Tube Amp Book (4th Edition)', and it simply cannot be left unchallenged. In general terms the statement is right - crossover distortion is particularly objectionable, but to state that it's different from 'harmonic distortion' is 100% wrong. Crossover distortion is harmonic distortion, and in terms of the harmonics created it's pretty much the same as clipping but with different phase relationships. What makes it objectionable is that it occurs at low levels, and gets worse as the level is reduced. As with all forms of distortion, it adds intermodulation products and often sounds much worse than an ill-advised simple measurement might indicate. When measuring low-level (crossover) distortion, it's imperative that the distortion waveform be examined on an oscilloscope, and preferably listened to through speakers or headphones as well. Failing to monitor the distortion residual is one of the things that gave early transistor amps a bad name. Full power distortion numbers might have been impressive, but crossover distortion was often high enough to be audible at low listening levels. Unfortunately, statements like the above tend to gain 'authority' the more they are repeated, and the Net is the perfect breeding ground for this. It's important to ensure that fact and fiction (the latter includes 'semi-facts' and 'factoids') are understood for what they are. For a given percentage of distortion and the same measurement bandwidth, the harmonic structures of clipping and crossover distortion are close to identical; the primary difference is phase. Having said all that, it's quite true that crossover distortion is probably the most intolerable of all types of distortion, because as noted above it gets worse as the level is reduced - exactly the opposite of what we expect to hear. But - it's still a form of harmonic distortion. Figure 1 shows the test setup I used in the simulator to measure the results from clipping circuits. Two signal generators are used, one producing a 1kHz sinewave and the other producing 1.1kHz - both at 1V peak (707mV RMS). The signals are mixed together, giving a composite signal with a voltage of 494mV. This would have been 500mV with no load, but there is a small load to the clipping circuits that reduces the level slightly. Figure 1 - Test Circuit Used In Simulator One of the advantages of a simulator is that it's easy to use very low value mixing resistors - as you can see, they are R1 and R2, at 10 ohms each. 
To build the circuit, these resistors would need to be much higher in value, and would need a buffer stage prior to the clipping circuits. R3 and D1 form an asymmetrical clipping circuit, and only the positive peaks are clipped. R4, D2 and D3 form the symmetrical clipper. The output voltages for each output are shown - the symmetrical clipper has a lower output voltage because more of the signal is clipped off by the diodes.
Figure 1A - Test Circuit Used For Listening Tests
To listen to the effects, I used the above circuit. This allows you to listen to the original composite tone, as well as the two different clipped waveforms. If you don't understand the concept of intermodulation distortion, then I urge you to try this. I don't expect that many people will have access to a spectrum analyser, but those who do will see waveforms very similar to those shown below, depending on the resolution of the analyser. The goal is to ensure that the concepts are understood. If you don't realise what's happening, a small amount of intermodulation and second harmonic distortion may well sound as if the music is 'enriched' somehow (I shall refrain from using any of the meaningless reviewer terms). In reality, you're hearing things that simply were not in the original recording. Whether you like this effect or not is immaterial; what is important is that you understand the reasons that cause it to sound different. Different is rarely better, but this seems to have been lost in the clutter of nonsense that surrounds the audiophile fraternity, where different seems to mean 'better' in most cases. I find this puzzling - I fully expect that any of my designs should sound much the same as any other, with the differences being output power, convenience, size or other design goals. Any two amplifiers of good performance should sound the same, and if any difference exists there will be a good reason for it. The claim that some amps are hugely better than others is just silly - there is no logical or scientific reason that two amplifiers of similar overall specification should sound different from each other. Strangely, the amps that are supposedly superior almost always have more distortion, higher output impedance and worse frequency response than their 'inferior' brethren. The basic criteria for hi-fi were established a long time ago, and have improved over the years, yet we have some reviewers claiming that valve equipment that was below par 50 years ago is better than transistor amps that trounce these 'new-old' amps in every respect. It's very hard not to be cynical. The tests I did are repeatable by anyone, and although the end result is exaggerated it does demonstrate the principles of both total harmonic distortion (THD) and IMD. Although I have only shown the results for IMD, it's also important to turn down one of the signal generators so you can also hear the difference between the symmetrical and asymmetrical distortion on a single sinewave. Vary the signal level so you can get a feel for the audibility of low-level distortion on a sinewave (which is far more audible than with music). It is possible to hear less than 0.5% THD on a single sinewave.
Figure 2 - Output Waveforms Of Each Clipping Circuit
The voltage waveforms from each clipping circuit are shown above. As you can see, the symmetrical clipper limits the peak voltage to ±600mV, but the asymmetrical circuit only limits the positive side; the negative side reaches -1V peaks.
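For anyone who would rather experiment in software than with a simulator or a pair of signal generators, the following is a minimal numerical sketch of the same test. It is not the circuit shown above - the diodes are modelled as ideal 0.6V clamps rather than real parts, and the resistive mixer is replaced by simple addition and scaling - and the variable names are arbitrary rather than taken from any published files.

```python
import numpy as np

fs = 48000                       # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # one second of signal

# Two-tone source: 1kHz and 1.1kHz, summed and halved, which is roughly
# what the resistive mixer feeding the clippers produces.
tone = 0.5 * (np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1100 * t))

# Idealised diode clippers. A single 'diode' clamps only the positive
# excursions (asymmetrical); back-to-back 'diodes' clamp both polarities
# (symmetrical). 0.6V stands in for the forward voltage of a small diode.
V_DIODE = 0.6
asym = np.minimum(tone, V_DIODE)            # positive peaks clipped only
sym = np.clip(tone, -V_DIODE, V_DIODE)      # both peaks clipped

for name, x in (("asymmetrical", asym), ("symmetrical", sym)):
    rms = np.sqrt(np.mean(x ** 2))
    print(f"{name:13s} peak {x.max():+.2f} / {x.min():+.2f} V, RMS {rms:.3f} V")
```

Printing the peaks shows the same behaviour as Figure 2: the symmetrical clipper holds both polarities to about ±0.6V, while the asymmetrical clipper leaves the negative excursions (close to -1V) untouched.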
Not surprisingly, the asymmetrical waveform has slightly less harmonic distortion, at 13.1%. The symmetrically clipped waveform measures a THD of 15.5%, so in theory it should sound worse (although in truth, both will sound pretty awful). The interesting measurement is not THD though - intermodulation distortion (IMD) is the sworn enemy of quality reproduction, and anything that reduces IMD is a bonus. By using the FFT (Fast Fourier Transform) facility in the simulator, it is possible to look at both the harmonic and intermodulation products of each clipped waveform, and that's where the big surprise lies. The two FFT traces are shown below. Without distortion, there would be two vertical peaks - one at 1kHz and the other at 1.1kHz. Everything else you see below is the direct result of distortion. Harmonic distortion is created for the two input frequencies, and intermodulation distortion is based on the mixture of the original frequencies, their harmonics, as well as sum and difference frequencies based on every frequency - originals plus distortion components.
Figure 3 - Harmonic & Intermodulation Products
Intermodulation creates additional sum and difference frequencies, so from 1kHz and 1.1kHz, we get 100Hz and 2.1kHz. Because the original frequencies are distorted, sum and difference frequencies are also generated for the harmonics. It stands to reason that fewer harmonics means fewer intermodulation products, and lower IMD is (and has been for a very long time) the goal of most designers. It is interesting to note that symmetrical clipping does not create the difference frequency! Those frequencies around 100Hz (1.1kHz - 1kHz = 100Hz) are completely missing. 100Hz is very visible on the green (asymmetrical) trace though, and it's also very audible. It is immediately obvious that there are many more intermodulation products in the green trace than in the red, and the green trace is the asymmetrically clipped waveform. Sum and difference products are quite obvious in both, but the symmetrical waveform (almost) completely lacks the frequencies at and around 100Hz, 2kHz, 4kHz, 6kHz and 8kHz (and beyond, of course). Note that the asymmetrical waveform not only includes the frequencies missing from the symmetrical waveform, but also includes all of the frequencies one expects in the harmonic structure. In other words, the asymmetrical waveform contains not only the supposedly 'nice' even-order harmonics and intermodulation products - it also contains all of the supposedly nasty odd-order ones as a bonus. There is absolutely nothing to gain by using circuits that produce even-order harmonics, because the intermodulation products are far worse than the simple harmonic distortion may indicate. Looking at the measured levels of the intermodulation products, we see that ... While the above seems counter-intuitive, it is easily tested using a pair of signal generators and a couple of diodes and resistors. There is nothing at all pleasant about the sound of the clipped waveforms - both sound pretty awful. The asymmetrically clipped waveform actually does sound marginally less harsh than the symmetrically clipped waveform, and it also has one characteristic that makes it sound 'better' - it creates bass! The difference frequency of 100Hz is quite audible, and this helps to trick your ears into thinking that it sounds 'nice'.
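The FFT comparison is just as easy to reproduce numerically. The sketch below is a hypothetical helper, not part of the original article or simulator files: it checks a handful of spectral bins - the fundamentals, the second-order products (the 100Hz difference, 2kHz and 2.1kHz) and a couple of third-order products - and shows the even-order products appearing only for the asymmetrical clipper.

```python
import numpy as np

fs = 48000
t = np.arange(0, 1.0, 1 / fs)
tone = 0.5 * (np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1100 * t))
asym = np.minimum(tone, 0.6)        # single-diode (asymmetrical) clipper
sym = np.clip(tone, -0.6, 0.6)      # back-to-back (symmetrical) clipper

def level_db(x, f):
    """Approximate level of the spectral component at frequency f, in dB re 1V."""
    spec = np.fft.rfft(x * np.hanning(len(x))) / (len(x) / 4)   # Hann window, amplitude corrected
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return 20 * np.log10(np.abs(spec[np.argmin(np.abs(freqs - f))]) + 1e-12)

# Fundamentals (1k, 1.1k), second-order products (100Hz, 2k, 2.1k) and
# two third-order products (3k, 3.1k), for both clippers.
for f in (100, 1000, 1100, 2000, 2100, 3000, 3100):
    print(f"{f:5d} Hz: asym {level_db(asym, f):7.1f} dB   sym {level_db(sym, f):7.1f} dB")
```

None of this changes how the two waveforms actually sound, of course - that comparison still has to be made by ear.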
By comparison, while the signal is still obviously distorted, the symmetrical clipping circuit has a marginally harder sound overall, and it lacks the bass (difference) signal, so it will almost invariably be judged to sound worse. This is what you will hear from most experts, and will read in countless books and websites. The fact is that both sound dreadful, and all measures possible must be used to minimise all forms of distortion in order to keep intermodulation distortion low. It is extremely important that anyone who doubts the claims made above builds the test circuit and listens for themselves. Every claim I've made can be reproduced and tested easily - that's why the complete circuit details are included. Unless people fully understand all aspects of distortion (and what it does to the music), we will continue to hear nonsense about negative feedback somehow 'ruining' the sound and similar silliness. There are innumerable claims that guitarists in particular like 'nice' even order distortion and dislike 'nasty' odd order distortion products. This is a very difficult claim to reconcile with reality, because almost all professional guitar amps use push-pull (symmetrical) output stages, and these cancel even order harmonics, leaving only the odd order distortion products. Of course there will be some even-order (and normally low level) distortion created in the preamp stages, and this cannot be removed by the output stage, but as noted, this will be fairly low level only, and usually doesn't contribute a great deal to the final signal delivered to the speaker.
Figure 4A - Typical Valve Guitar Amp Preamp
The above shows a typical guitar amp preamp, which is naturally all Class-A, single-ended. It's set up for two channels with a master volume, but only one channel is shown. If this preamp is driven hard and the master volume is set low to get distortion at low volume, the result will probably not be as you expect. There are plenty of opportunities for the valves to be overdriven, but doing so will create asymmetrical distortion and it will most likely sound very ordinary. Note that the diagram is not intended to reflect any particular amplifier; it's simply an example. The voltages shown are AC (RMS) and are based on 10dB loss in the tone controls and at each volume control. Anyone who has worked on valve guitar amps will be aware that the preamp stages are generally fairly clean, unless driven very hard indeed. Even with a 100mV input signal, the first preamp valve will only have an output of around 5V, since the first gain stage will typically operate with a gain of around 50. Throughout the remainder of the preamp, most gain stages are either modest, or follow the tone stack (for example), which has a considerable overall loss. Depending on the tone settings, the tone stack can have as much as 20dB loss (a factor of 10 in voltage), but as an example we'll assume the loss is 10dB for the settings used. The following stage might have a typical gain of 25 or so. This stage is sometimes called the 'post' amplifier, because it's after the tone stack. There are many different arrangements used and it's not possible to try to analyse them all, but many forum sites have complaints of 'master volume' amps that generate very unpleasant distortion if the preamp gain is increased too far, but with the master volume turned down. The descriptions vary, but 'spitting', 'harsh', 'thin' and similar adjectives are often used.
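To make the gain-staging argument concrete, here is a trivial calculation using the representative figures quoted above (a first-stage gain of about 50, roughly 10dB lost at each volume control and in the tone stack at these settings, and a 'post' stage gain of about 25). The stage list is illustrative only and does not describe any particular amplifier.

```python
# Representative valve preamp gain chain, using the example figures quoted
# in the text above. All values are illustrative.
def db_to_ratio(db):
    return 10 ** (db / 20)          # dB -> voltage ratio

stages = [
    ("first gain stage", 50.0),
    ("volume control", db_to_ratio(-10)),
    ("tone stack", db_to_ratio(-10)),
    ("post amplifier", 25.0),
]

level = 0.1    # 100mV RMS from the guitar
for name, gain in stages:
    level *= gain
    print(f"after {name:17s}: {level:6.2f} V RMS")

# A real stage would be in gross overload long before the final figure was
# reached - the point is only how quickly the signal level escalates.
```

Levels of that order are exactly what drive the later stages into the unpleasant-sounding overload described in those complaints.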
This happens because the Class-A preamp stages are pushed into heavy distortion, and because the distortion is highly asymmetrical. Contrary to popular belief, the majority of guitarists prefer symmetrical clipping. For example, almost all 'fuzz boxes' clip symmetrically, and this wasn't done to stop people from buying them! At rational volume settings and even with high gain preamps, expect the level from any of the preamp stages to be no more than about 10V RMS. This is still more than enough to drive the phase splitter and output stage to hard clipping. If a master volume is provided, then these levels can be much higher if the master is set low and the volume control is advanced to close to maximum (volume at eleven, anyone?). The result is asymmetrical distortion, possible 'blocking' (where valves are turned off due to grid current and take time to recover) and a distortion 'tone' that most guitarists find very unpleasant. For those who don't know the term, blocking occurs when the input signal is large enough to forward bias the control grid. Current flows as a result, and the input capacitor charges so that when the signal stops or is reduced, the valve can be completely turned off until the capacitor discharges. Meanwhile, any low level signal is blocked (cut off) and higher level signals are half-wave rectified. The resulting sound is always very unpleasant, and can be described as 'spitting' or perhaps 'farting'. To prevent blocking, the input signal to a valve must never be allowed to exceed the voltage at the cathode. The next drawing is the power stage. The voltages are again signal levels in RMS, and are representative only. The phase splitter has a gain of 6, and the output stage has a gain of 22. These gains are all within the normal range for a valve guitar amplifier. The output transformer has a primary impedance of 3,700 ohms plate-to-plate. All voltages shown are with the level set just below clipping.
Figure 4B - Typical Valve Push-Pull Output Stage
Now we can look at the even-order distortion cancellation that takes place in the output stage. In the following, we see the output from a single output valve as it approaches clipping (red trace), and the resulting waveform when two valves with identical performance are summed in the output transformer (green trace). The second valve in the output stage has the same waveform as the red trace, but it's shifted by one half-cycle because of the phase splitter. The second waveform is not an inverted copy of the red trace! Note that for analysis, it is essential that the negative feedback is disabled, otherwise it will try to over-ride what you are trying to see.
Figure 5 - Single Output Valve Vs. Push-Pull
The asymmetry in the red waveform is clearly evident, and the distortion measures about 12.8%. When two such identical waveforms are shifted by 1/2 cycle and summed in the transformer (the normal case for an output stage), the result is the clean waveform shown in the green trace, with only 0.5% distortion. This is a dramatic reduction. Please note that this is from a simulation and 'real world' results will not be as good, but the overall trend is exactly the same.
Figure 6 - Single Ended Output Spectrum
Above we see the spectrum of the single-ended waveform (the one shown in red in Figure 5), and both even and odd harmonics are evident. Despite claims to the contrary, the second (and other even-order) harmonics do not exist in isolation.
They are accompanied by third and other odd-order harmonics, exactly as expected from measurement and/or simulation.
Figure 7 - Push-Pull Output Spectrum
Once the two asymmetrical signals are combined in the transformer (green trace in Figure 5), we see that the even-order harmonics are cancelled. The degree of cancellation depends on the output valves, and how well matched they are. Perfectly matched valves will give complete cancellation, but in reality there will always be some differences. Note that the levels of the odd-order harmonics are unchanged between Figures 6 and 7. It is important to understand that you will never see the asymmetrical waveform in a push-pull amplifier, because the transformer performs the summing of the two distorted signals. You might be able to see the general trend if one output valve is removed, but the transformer core will then saturate due to the DC flowing in only one winding, and the waveform will be very different from what you might expect. Remember that the goal of any high fidelity system is that it should neither remove nor add anything to the original. The least intrusive change is frequency response distortion. Frequencies (very low or very high) may be reduced, or the overall frequency response might be modified. These are forms of distortion that are generally easy enough to deal with by applying equalisation ... depending on the reason for the anomaly. Harmonic and intermodulation distortion are another matter altogether. Once a signal's waveshape has been distorted, it is (generally speaking) impossible to remove the additional frequencies that were generated. If you were to build a circuit that generated the exact opposite of the original distortion, you could actually 'undo' the distortion, but such a circuit is extraordinarily difficult to achieve. It has to replicate a perfect inverse of the original distortion, so that every harmonic and IMD product is generated with the exact opposite polarity and the two complete sets of unwanted frequencies cancel. While this can be done easily enough in a simulator using ideal components (everything matched perfectly in all respects), it's a tad more difficult when you have a circuit that creates distortion that changes with age, temperature and whim. While it is certainly feasible, complex waveforms will be subjected to intermodulation ... twice. The original distorting circuit will add IMD, and the second 'anti-distortion' circuit will also add IMD, but with all signals of the opposite polarity. An 'anti-distortion' circuit for a single-ended valve power amp would be very complex indeed. Arranging the valves in push-pull is the simplest approach, but that only cancels even harmonics. Even so, the reduction of both harmonic and intermodulation distortion is very worthwhile, and not one of the high quality valve amps that were available at the end of the 'valve era' used a single-ended output stage ... not one! All of these expensive (McIntosh, Quad, etc.) amps went to extremes to reduce distortion to the minimum possible. Were the designers of the day wrong? I certainly don't think so. Intermodulation is a function of all non-linear circuits, and is not negotiable. The supposedly 'nice' even order (single ended) distortion creates more intermodulation products than the 'nasty' odd-order distortion - as might be found in minuscule amounts from very high quality push-pull valve amps, opamps and transistor power amps.
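The even-order cancellation shown in Figures 5 to 7 can also be demonstrated with a few lines of code. The transfer function below is an arbitrary asymmetrical curve (linear gain plus small second- and third-order terms), not a model of any real valve, so the exact figures mean nothing; what matters is that the second harmonic disappears when the two half-cycle-shifted signals are combined, while the level of the third harmonic relative to the fundamental is unchanged - exactly the behaviour described above.

```python
import numpy as np

fs, f0 = 48000, 1000
t = np.arange(0, 0.1, 1 / fs)          # exactly 100 cycles of the test tone
drive = np.sin(2 * np.pi * f0 * t)

def stage(x):
    # Arbitrary asymmetrical transfer curve: linear gain plus small second-
    # and third-order terms. Not a model of any real valve.
    return x + 0.2 * x ** 2 - 0.05 * x ** 3

single_ended = stage(drive)
# Push-pull: the phase splitter drives the second 'valve' with the inverted
# signal, and the output transformer responds to the difference of the two.
push_pull = stage(drive) - stage(-drive)

def harmonics(x, n_max=4):
    """Harmonic amplitudes relative to the fundamental."""
    mag = np.abs(np.fft.rfft(x)) / (len(x) / 2)
    bin_of = lambda n: round(n * f0 * len(x) / fs)
    h1 = mag[bin_of(1)]
    return [mag[bin_of(n)] / h1 for n in range(2, n_max + 1)]

print("H2..H4 relative to the fundamental")
print("  single-ended:", ["%.3f" % h for h in harmonics(single_ended)])
print("  push-pull:   ", ["%.3f" % h for h in harmonics(push_pull)])
```

In a real amplifier the same cancellation happens continuously in the output transformer, with the residual even-order distortion set by how well the two valves match.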
These days, the levels are so low as to be difficult to measure, and are well below the threshold of audibility. Everything shown or described above can be reproduced in any home lab quite easily - no data have been doctored in any way. Graphs and charts were formatted to match normal ESP styles, but the data are unchanged. All waveforms were produced using the SIMetrix simulator, and are easily reproduced. Most simulation packages will give very similar (if not identical) results. I urge anyone who might have the slightest doubt to do the test for themselves. Get hold of a couple of audio generators, resistors and diodes. Hook up the circuit as shown in Figure 1A - listen to the output results. I did, and also checked that the simulated FFT matches reality - my oscilloscope has FFT capabilities and the traces show clearly that the simulations are very close to reality. Some variations are to be expected, simply because exact values and frequencies are too time-consuming to try to achieve. It is critically important to understand that if two amplifiers sound different from each other (in a proper blind test), then one or both of them has a fault. There have been some extraordinarily good valve amps made, and without exception they are push-pull. Some of these amplifiers will be found to be virtually indistinguishable from a good transistor amp in a double-blind listening test. The extension of this is that if you can hear a difference between a valve amplifier (of any topology) and a good quality transistor amp, the valve amp is obviously making changes to the signal that the transistor amp is not. The most common change is distortion, although frequency response is often wobbly if the amp's output impedance is non-zero. Frequency response is easily corrected with equalisation, but harmonic and intermodulation distortion cannot be undone with real-life circuits. While it might be possible with some extremely clever DSP (digital signal processor) programming, there are countless modern amplifiers available that already have distortion and intermodulation levels that are well below the threshold of audibility, with many close to the limits of measurement equipment. If it turns out that you like harmonic and intermodulation distortion, and prefer to listen to your music with this distortion, then far be it from me to deny you this pleasure. I only ask that you don't claim that it's hi-fi, and don't try to convince me or anyone else that we should share your passion. No-one would deny that guitar amps are a special case, and that distortion is not just a fact of life but a requirement. Since the vast majority of guitar amps and distortion pedals feature symmetrical clipping, it's difficult to understand the basis for claims of 'second harmonic' distortion. Comparatively low order distortion is common, but even-order distortion in isolation is not only uncommon but exceptionally difficult to achieve (it's close to impossible with most electronic circuits!). As noted earlier, make sure that you read Intermodulation - Something New To Ponder, as the analysis of asymmetrical vs. symmetrical distortion is covered in greater depth, and has additional resources so you can prove it to yourself.
https://sound-au.com/valves/thd-imd.html
Abstract: Mitochondria are responsible for aerobic respiration and large-scale ATP production in almost all cells of the body. Their function is decreased in many neurodegenerative and cardiovascular disease states, in metabolic disorders such as type II diabetes and obesity, and as a normal component of aging. Disuse of skeletal muscle from immobilization or unloading triggers alterations of mitochondrial density and activity. Resultant mitochondrial dysfunction after paralysis, which precedes muscle atrophy, may augment subsequent release of reactive oxygen species, leading to protein ubiquitination and degradation. Spinal cord injury is a unique form of disuse atrophy, as there is a complete or partial disruption in tonic communication between the central nervous system (CNS) and skeletal muscle. Paralysis, unloading and disruption of CNS communication result in a rapid decline in skeletal muscle function and metabolic status, with disruption in activity of peroxisome-proliferator-activated receptor-gamma co-activator 1 alpha and calcineurin, key regulators of mitochondrial health and function. External interventions, both acute and chronic, such as training with body-weight-assisted treadmill walking or electrical stimulation, have consistently demonstrated adaptations in skeletal muscle mitochondria and in expression of the genes and proteins required for mitochondrial oxidation of fats and carbohydrates to ATP, water, and carbon dioxide. The purpose of this mini-review is to highlight our current understanding as to how paralysis mechanistically triggers downstream regulation in mitochondrial density and activity and to discuss how mitochondrial dysfunction may contribute to skeletal muscle atrophy.
http://bioblast.at/index.php/Gorgey_2018_Eur_J_Appl_Physiol
This review reports on the effects of hypoxia on human skeletal muscle tissue. It was hypothesized in early reports that chronic hypoxia, as the main physiological stress during exposure to altitude, might per se positively affect muscle oxidative capacity and capillarity. However, it is now established that sustained exposure to severe hypoxia has detrimental effects on muscle structure. Short-term effects on skeletal muscle structure can readily be observed after 2 months of acute exposure of lowlanders to severe hypoxia, e.g. during typical mountaineering expeditions to the Himalayas. The full range of phenotypic malleability of muscle tissue is demonstrated in people living permanently at high altitude (e.g. at La Paz, 3600–4000m). In addition, there is some evidence for genetic adaptations to hypoxia in high-altitude populations such as Tibetans and Quechuas, who have been exposed to altitudes in excess of 3500m for thousands of generations. The hallmark of muscle adaptation to hypoxia in all these cases is a decrease in muscle oxidative capacity concomitant with a decrease in aerobic work capacity. It is thought that local tissue hypoxia is an important adaptive stress for muscle tissue in exercise training, so these results seem counter-intuitive. Studies have therefore been conducted in which subjects were exposed to hypoxia only during exercise sessions. In this situation, the potentially negative effects of permanent hypoxic exposure and other confounding variables related to exposure to high altitude could be avoided. Training in hypoxia results, at the molecular level, in an upregulation of the regulatory subunit of hypoxia-inducible factor-1 (HIF-1). Possibly as a consequence of this upregulation of HIF-1, the levels of mRNAs for myoglobin, for vascular endothelial growth factor and for glycolytic enzymes, such as phosphofructokinase, together with mitochondrial and capillary densities, increased in a hypoxia-dependent manner. Functional analyses revealed positive effects on V̇O2max (when measured at altitude), on maximal power output and on lean body mass. In addition to the positive effects of hypoxia training on athletic performance, there is some recent indication that hypoxia training has a positive effect on the risk factors for cardiovascular disease.
Introduction
The purpose of the following review is to describe the structural adaptations of skeletal muscle tissue in humans in response to temporary or permanent exposure to altitude. Altitude, and in particular the hypoxia that poses its main physiological challenge, can be seen as a stressful environmental condition to which organisms have a capacity to respond by adaptation (Bligh and Johnson, 1973). In the context of this review, we will explore the phenotypic plasticity of skeletal muscle tissue with regard to altitude (acclimatization). Limited evidence is presented that, in some high-altitude populations, some adaptations to altitude may have become genetically fixed. In the context of altitude research, the term acclimation is used to describe phenotypic alterations that are the response to simulated as opposed to real exposure to high altitude (Banchero, 1987). This review will also cover results of acclimation studies in which hypoxia was used during exercise training sessions in humans with the aim of improving athletic performance.
The first report on adaptations of muscle tissue to hypoxia, notably in humans, is the landmark paper of Reynafarje (Reynafarje, 1962), who found oxidative capacity and myoglobin concentration to be elevated in biopsies of sartorius muscle from permanent high-altitude (4400m) residents compared with sea-level dwellers. Before that, Valdivia (Valdivia, 1958) presented evidence for a significantly increased (by approximately 30%) capillary supply to skeletal muscle tissue in guinea pigs native to the Andes compared with animals raised at sea level. The data presented in these papers influenced the way in which physiologists thought about the effects of hypoxia on muscle tissue and, as a consequence, also influenced the design of experiments for almost 30 years. Hochachka et al. (Hochachka et al., 1983) condensed the concepts with which high-altitude adaptations in muscle tissue were discussed into what they called an interpretive hypothesis. They defined the key problem of the organism in hypoxia as being 'to maintain an acceptably high scope for aerobic metabolism in the face of the reduced oxygen availability in the atmosphere'. According to this hypothesis, this is achieved by increasing the activities of oxidative enzymes to augment the maximum flux capacity of aerobic metabolism. Supporting a larger oxidative capacity would, in turn, necessitate adaptations of the oxygen-transfer system such as an increased capillarity, shorter diffusion distances and a higher myoglobin concentration in muscle. Hochachka et al. (Hochachka et al., 1983) were able to explain the observations of the classical papers and many subsequent studies within a coherent conceptual framework. However, doubt was cast on this unifying view of hypoxia adaptations by the review of Banchero (Banchero, 1987). Reporting on animal studies with hypoxia exposures of more than 2 weeks, he came to the conclusion that skeletal muscle capillarity does not respond to normothermic hypoxia even when the muscle is active. He criticized most hypoxia studies on the grounds that they did not control for activity or temperature and that both these factors could be major determinants of muscle adaptive events during hypoxia. It is unclear at present to what extent species differences in responses to hypoxia might be responsible for divergent experimental outcomes when the effects of hypoxia are compared across species.
Muscle structure in lowlanders exposed to acute hypoxia for up to 2 months
A dramatic and consistent consequence of severe altitude exposure, such as during an expedition to the Himalayas (Fig.1), is a loss of body mass (typically between 5 and 10%) and a similar loss of muscle volume (Hoppeler et al., 1990). These decreases in muscle and body mass do not seem to be due to malabsorption (Kayser et al., 1992) and may be circumvented when optimal housing and nutritional conditions are provided at altitude (Kayser et al., 1993). Concomitant with the decrease in muscle volume, we found a reduction in muscle fibre cross-sectional area in the vastus lateralis muscle of 20% in 14 mountaineers after 8 weeks at altitudes above 5000m (Hoppeler et al., 1990). Similar reductions of 25 and 26% for type II and type I fibres, respectively, were reported for Operation Everest II, an experiment in which subjects were exposed to simulated extreme altitude (MacDougall et al., 1991). However, there is no evidence for fibre type transformations in response to hypoxia exposure in humans (Green et al., 1989).
Capillary density is found to be increased by 9–12% in human biopsy studies (Green et al., 1989; Hoppeler et al., 1990; MacDougall et al., 1991). The capillary-to-fibre ratio remains unchanged, arguing against capillary neoformation in humans exposed to hypoxia, e.g. during typical expedition conditions. The observed increase in capillary density can therefore be attributed entirely to the reduction in muscle cross-sectional area that is a consequence of muscle fibre atrophy. With regard to oxygen diffusion, however, the situation under given conditions is improved because the same capillary bed (i.e. an identical total capillary length) serves a smaller muscle volume. Muscle oxidative capacity is found to be moderately reduced by acute exposure to altitude. In seven subjects on a Swiss Himalayan expedition, citrate synthase and cytochrome oxidase activities were reduced by just over 20% after return to sea level (Howald et al., 1990). Similar decreases in succinate dehydrogenase and hexokinase activities were reported for five subjects on Operation Everest II (Green et al., 1989; MacDougall et al., 1991). Looking at 14 mountaineers after return to sea level, we found a decrease in the volume density of mitochondria of close to 20% (Hoppeler et al., 1990). In addition, we found that the subsarcolemmal population of mitochondria was reduced significantly more (reduced by 43%) than the interfibrillar population of mitochondria (reduced by 13%). The interfibrillar mitochondria make up much the largest fraction of the total mitochondrial population. The significance of this finding is unclear at present. Looking at the individual data for the 14 mountaineers, the subjects with the highest pre-expedition mitochondrial volume densities suffered the greatest decreases in muscle oxidative capacity. The extent to which a reduction in habitual activity from pre-expedition conditions might have contributed to this result remains open. To appreciate the total extent of the morphological changes in skeletal muscle as a consequence of a ‘typical’ expedition, we have to consider both the reduction in muscle volume and the reduction in oxidative capacity of muscle fibres. The total loss of mitochondria seems to be of the order of 30%, while the total length of the capillary bed is maintained. From this, we can conclude that the oxygen supply situation for the remaining mitochondria should be improved. In addition to changes in the structures related to oxygen supply and oxygen utilization, we noted a threefold increase in lipofuscin levels after exposure to high altitude (Martinelli et al., 1990). Lipofuscin is a degradation product formed by peroxidation of lipid and is indicative of muscle fibre damage (Fig.2). The same study also found evidence for muscle regeneration: the volume density of satellite cells, but not of myonuclei, increased significantly upon return from the expedition. From studies of acute hypoxia exposure of lowlanders, whether in real or simulated ascents to the peaks of the Himalayas, there are some key structural findings that can explain at least some of the functional observations. In particular, the reduction in the maximal rate of oxygen uptake,V̇O2max, after a prolonged exposure to hypoxia can probably be attributed to the combined reduction in muscle cross-sectional area and in muscle oxidative capacity. 
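The 'order of 30%' estimate is simply the product of the two separate reductions. As a rough, purely illustrative check (the exact percentages vary between studies and subjects, so the inputs below are representative of the ranges quoted above rather than measured values):

```python
# Rough back-of-envelope check of the 'order of 30%' figure quoted above.
# The input values are illustrative, taken from the ranges given in the text.
muscle_volume_loss = 0.125     # ~10-15% reduction in muscle volume
mito_density_loss = 0.20       # ~20% reduction in mitochondrial volume density

remaining = (1 - muscle_volume_loss) * (1 - mito_density_loss)
print(f"mitochondria remaining: {remaining:.0%}  (about {1 - remaining:.0%} lost)")
```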
The qualitative evidence further supports the idea that hypoxia, such as during a typical expedition to the Himalayas, is detrimental to muscle tissue and, hence, to muscle performance capacity. We proposed a direct effect of hypoxia on protein synthesis (Cerretelli and Hoppeler, 1996). In view of the relatively high oxygen costs of protein synthesis, this remains a plausible, but not directly tested, hypothesis (Hochachka et al., 1996).
Muscle structure and aerobic work capacity in permanent high-altitude residents
As discussed in the previous paragraph, acute exposure of lowlanders to high altitude for up to 2 months did not cause the 'favourable' adaptations of skeletal muscle tissue, such as a larger capacity for oxygen use and delivery, that had been expected. It was therefore of interest to study the skeletal muscle tissue of high-altitude residents, i.e. of people who could be expected to show complete acclimatization because they had grown up at altitude and were living permanently in hypoxic conditions. We studied 20 young residents of La Paz of mixed ethnic origin (students at the University of La Paz; altitude 3600–4000m) before and after 6 weeks of endurance exercise training in local hypoxia or with supplemental oxygen (Desplanches et al., 1996). At the outset of the study, muscle fibre type composition (43% type I, 34% type IIA, 4% type IIAB and 19% type IIB fibres) was not notably different from the fibre type composition of lowlanders with a similar activity level, as reported in many studies. Fibre cross-sectional area (3500μm2) was found to be normal or slightly reduced compared with that of lowlanders, but commensurate with the somewhat lower body mass of these highlanders. Before training, their muscle oxidative capacity, estimated from the total volume density of mitochondria (3.94%), was smaller by at least 30% than what would have been expected for untrained young lowlanders (Hoppeler, 1986). Surprisingly, the capillary-to-fibre ratio (1.4) and capillary density (404 capillaries per mm2) were also considerably lower than in a comparable lowland population. This study therefore indicated a clear reduction in muscle oxidative capacity together with a commensurate reduction in capillarity in untrained permanent high-altitude residents. After 6 weeks of endurance training, both muscle oxidative capacity and capillarity increased significantly, and the increases were independent of whether training was carried out in hypoxia or normoxia. Moreover, the relative increases in V̇O2max, in the volume density of muscle mitochondria and in muscle capillarity were similar to those observed in lowland training studies of identical duration and intensity (Hoppeler et al., 1985). These results therefore support the idea that the higher muscle oxidative capacity observed in highlanders than in lowlanders in the classical study of Reynafarje (Reynafarje, 1962) must be attributable to the difference in training status of the two populations he studied (B. Saltin, personal communication). A further finding of note is the low content of intracellular lipid droplets (IMCLs, intramyocellular lipids) in muscle biopsies from permanent high-altitude residents. We found on average less than half the IMCL content (0.2% of muscle fibre volume) of a comparable lowland population (0.5%). Whereas lowlanders typically double their IMCL content under similar training conditions (Hoppeler et al., 1985), high-altitude residents did not increase the lipid content of their muscle fibres significantly.
Large intracellular lipid deposits are observed in endurance athletes, in particular in athletes competing in events lasting several hours, such as cyclists (Hoppeler, 1986), and may also be induced by consuming a high-fat diet (Hoppeler et al., 1999). These deposits are thought to be of advantage and to be related to the higher reliance of trained muscle on lipids as a substrate for mitochondrial respiration. Low IMCL contents are compatible with the contention that permanent sojourn at high altitude induces a shift in muscle metabolism towards a preferred reliance on carbohydrates as substrate (Hochachka et al., 1996). In conclusion, we found that permanent high-altitude residents have an unremarkable fibre type composition, slightly reduced fibre cross-sectional area, remarkably low oxidative capacities with a capillary supply reduced in proportion, and low intracellular lipid stores. These acclimatory features place high-altitude residents at a distinct disadvantage when exposed to sea-level conditions. Favier et al. (Favier et al., 1995) reported that the V̇O2max of this high-altitude population increased by only 8.2% when a V̇O2max test was carried out in acute hypobaric normoxia (supplementing the inspired air with oxygen). The data compiled by Cerretelli and Hoppeler (Cerretelli and Hoppeler, 1996) indicate that the V̇O2max of a lowlander would increase by 20–25% when tested under similar experimental conditions.
Muscle structure in Sherpas and Quechuas
The possible effects of phylogenetic adaptation (Hochachka and Somero, 1984) to hypoxia can be studied in populations that have been living at high altitude for thousands of generations, such as Tibetans and Quechuas. It has been proposed from a phylogenetic analysis, assuming a species life of 100000 years for humans, that for a third of that time the Himalayan highlanders and the Andean highlanders did not share common ancestors (Hochachka et al., 1998). Thus, the hypoxia defence mechanisms observed in these two populations arose independently and by positive selection. These authors mention five response systems in which similar traits have evolved in both populations: a blunted hypoxic ventilatory response, a blunted pulmonary vasoconstrictor response, an upregulation of expression of vascular endothelial growth factor, an upregulation of expression of erythropoietin in the kidney and regulatory adjustments of metabolic pathways in skeletal muscle tissue. In the context of the present review, we will concentrate mainly on the latter. Tibetans (Kayser et al., 1991; Kayser et al., 1996) and Quechuas (Rosser and Hochachka, 1993) have a (small) preponderance of slow type I fibres in vastus lateralis muscle (i.e. close to 60% in Tibetans and 68% in Quechuas; N=3) compared with approximately 50% in typical lowland populations. There is also a tendency for the highland populations to have estimates of fibre cross-sectional area at the low end of the normoxic spectrum. Muscle oxidative capacity, measured as mitochondrial volume density, is reduced in Tibetans (3.96%), but their capillary density is within the normal range (467 capillaries per mm2) (Kayser et al., 1991). Comparing second-generation Tibetans (refugees, born and raised in Katmandu, Nepal, 1300m) with lowland Nepalese living in the same city, significantly lower volume densities of mitochondria and similarly reduced citrate synthase activities, together with lower intramyocellular lipid concentrations and lower 3-hydroxy-acyl-CoA dehydrogenase activities, were noted for Tibetans.
Moreover, Tibetans had slightly, but significantly, reduced estimates of fibre cross-sectional area. These Tibetans were never exposed to the altitudes at which their ancestors lived (3000–4500m), emphasizing a hereditary component to these differences. Together, these adaptations in Tibetans and Quechuas were interpreted as a downregulation of maximum aerobic and anaerobic exercise capacities with a concomitant upregulation of oxidative compared with glycolytic contributions to energy supply (Hochachka et al., 1998). The preponderance of type I fibres is taken as favouring a tighter coupling between ATP demand and ATP supply, reduced lactate accumulation and improved endurance under submaximal conditions. To this, one might add that the shift away from lipid substrates would further optimize the amount of ATP produced per litre of oxygen consumed.
Muscle structure with training in intermittent hypoxia, acclimation studies
Since the seminal paper on muscle tissue of permanent high-altitude residents (Reynafarje, 1962), it had tacitly or openly been assumed that one of the key factors modulating the response of muscle tissue to exercise was local tissue hypoxia. As the evidence from acute and permanent exposure to hypoxia in humans indicated otherwise, it became necessary to design experiments in which a potential hypoxia stimulus could be dissociated from the (negative) effects of permanent exposure to altitude. One way to achieve such a dissociation is to make subjects exercise in hypoxia but to keep them in normoxia for the remainder of the day (intermittent hypoxia exposure). Studies using these protocols encounter the problem of standardization of absolute versus relative exercise intensities at different levels of hypoxia. The general consensus seems to be that the use of hypoxia produces training effects similar to, but not identical with, the effects seen after training in normoxia, but with small advantages (and no disadvantages) of hypoxia compared with normoxia training, depending on the training conditions and the fitness of the subjects (Bailey et al., 2000; Desplanches et al., 1993; Emonson et al., 1997; Melissa et al., 1997; Terrados et al., 1988; Terrados et al., 1990). Training competitive cyclists in a hypobaric chamber (2300m) for 3–4 weeks, Terrados et al. (Terrados et al., 1988) noted that work capacity at altitude was improved more after training at altitude than after sea-level training. Terrados et al. (Terrados et al., 1990) had subjects training one leg in a hypobaric chamber (2300m) and the other leg under sea-level conditions (4 weeks, 3–4 training sessions of 30min per day, at the same absolute training intensity of 65% of pre-training exercise capacity). In the hypobaric-trained leg, time to fatigue was improved significantly more than in the normobaric-trained leg. Moreover, the hypobaric-trained leg showed a significantly larger increase in citrate synthase activity and an increase in myoglobin concentration. Another study using single-leg training at a simulated altitude of 3300m also showed larger increases in citrate synthase activity in the hypoxia-trained than in the normoxia-trained leg, but no significant differences in the improvements in muscle function (Melissa et al., 1997). Desplanches et al. (Desplanches et al., 1993) trained subjects for 3 weeks (2h per day, 6 days per week) at simulated altitudes up to 6000m or under sea-level conditions and found that V̇O2max improved in hypoxia-trained subjects, but only when measured in hypoxia.
The muscle structural adaptations were similar in hypoxia and normoxia training, except for an increase in muscle volume and muscle fibre cross-sectional area, which were observed only with hypoxia training. Taken together, these studies are compatible with the generally held idea that altitude training is of advantage for competition at altitude (Cerretelli and Hoppeler, 1996). In addition, they suggest that training in hypoxia could have specific effects on muscle tissue not seen with training of similar intensity in normoxia. However, the data currently available do not indicate which altitude or which training protocol is optimal for improving athletic performance capacity. To test the hypothesis of a specific response of skeletal muscle to hypoxia training, in particular a possible involvement of hypoxia-inducible factor 1 (HIF-1), we used training programmes in which hypoxia was present only during the training sessions (Vogt et al., 2001; Geiser et al., 2001). Four groups of subjects were set up; two of these trained under normoxic and two under hypoxic conditions (corresponding to an altitude of 3850m) for 30min, five times a week, for a total of 6 weeks on a bicycle ergometer. Within each oxygenation condition, one group trained at a high intensity, corresponding to the anaerobic threshold, and the other some 25% below this level. Muscle biopsies were taken from the vastus lateralis muscle before and after the training period and analysed morphometrically and for changes in mRNA levels of proteins potentially implicated in the response to hypoxia. In vitro experiments had revealed that HIF-1, which is involved in oxygen sensing in mammalian cells, including skeletal muscle, is specifically activated by hypoxia (Wenger, 2000; Semenza, 2000). The transcription factor HIF-1 targets genes coding for proteins involved in oxygen transport (erythropoietin and vascular endothelial growth factor, VEGF) as well as genes coding for glycolytic enzymes and glucose transporters. In our experiments, mRNA levels of the regulatory subunit of HIF-1 increased after training under hypoxic conditions irrespective of training intensity, but not after training in normoxia (Fig.3A; Hyp-high, +82%, P<0.10; Hyp-low, +58%, P<0.05). To us, this suggests a specific molecular response to the hypoxic stimulus. Only high-intensity training in hypoxia increased the mRNAs coding for myoglobin and VEGF (Fig.3C,D). The higher level of the VEGF mRNA was reflected by a parallel increase in capillary density (Fig.4B). Furthermore, we detected increases in levels of mRNA coding for phosphofructokinase, which is involved in the glycolytic pathway and is an established HIF downstream gene (Fig.3C), and in mitochondrial volume density (Fig.4A) after high-intensity training in both hypoxia and normoxia. Both the changes in phosphofructokinase mRNA (Fig.3C) and in mitochondrial volume density (Fig.4) were larger with the hypoxic stimulus (ANOVA). Taken together, these results support the involvement of HIF-1 in the regulation of adaptation processes in skeletal muscle tissue after training in hypoxia. Our results suggest that high-intensity training in hypoxia leads to adaptations that compensate for the reduced availability of oxygen during training. A high capillarity facilitates the supply of oxygen and substrates to muscle cells. The higher concentration of myoglobin could improve the capacity for storing and transporting oxygen within muscle cells.
Finally, by inducing metabolic pathways that favour the use of carbohydrates instead of lipids as substrate (upregulation of glycolytic and oxidative pathways), oxygen would be used more efficiently. With regard to the increase in muscle myoglobin concentration demonstrated in humans only after high-intensity training in hypoxia, we hypothesize that this could be of advantage to exercise performance at altitude. Endurance training, sprint training and resistance training in normoxia failed to induce changes in muscle myoglobin concentration in humans (Jansson et al., 1982; Svedenhag et al., 1983; Jacobs et al., 1987; Hickson, 1981; Harms and Hickson, 1983; Masuda et al., 1999). In elite cyclists, there is a high correlation between cycling performance and muscle myoglobin content (Faria, 1992). A high myoglobin concentration could facilitate oxygen supply under hypoxic conditions when training or competing at altitude. We have tested the functional consequence of intermittent hypoxia training in several studies with untrained subjects, endurance-trained athletes and elite alpine ski racers (Vogt, 1999; Vogt et al., 1999). Overall, our results indicate an increase in V̇O2max, an increase in maximal power output, an increase in maximal ventilatory response and an improvement in the rating of perceived exertion, particularly when these variables are measured at altitude. In addition to the effects of the hypoxic stimuli on exercise performance, there is recent evidence that intermittent hypoxia training might have clinical implications. Bailey et al. (Bailey et al., 2000) trained physically active subjects either in normoxia or in normobaric hypoxia (fractional O2 content 16%). After training both in normoxia and in hypoxia, concentrations of free fatty acids, total cholesterol, HDL-cholesterol and LDL-cholesterol were decreased. The concentration of homocysteine, an amino acid implicated in coronary disease, was reduced by 11% after hypoxia training only. Furthermore, maximal systolic blood pressure was reduced after hypoxia training, indicating a hypotensive effect of hypoxia training, possibly mediated by morphological changes in the endothelium. From these results, the authors concluded that hypoxia training might be beneficial for patients with cardiovascular diseases. Taken together, intermittent hypoxia training (‘living low – training high’) has been shown to elicit specific molecular responses in skeletal muscle tissue. It is likely that this type of training has the potential to produce a (small) increase in muscle mass not seen in response to normoxia training. It is not presently possible to make specific recommendations as to the best protocols to be used (in terms of intensity, duration and training altitude) to improve performance at altitude. The increase in muscle myoglobin content (Reynafarje, 1962) and capillarity may at least partly explain why functional improvements after hypoxia training are more pronounced under hypoxic testing conditions (altitude specificity of training). Intermittent hypoxia training can be considered to be complementary to training schemes in which (mild) hypoxia is applied over hours with the intention of increasing the aerobic performance capacity by increasing haemoglobin concentration (‘living high – training low’). The disadvantage of intermittent hypoxia training is the requirement for technical installations to simulate appropriate altitude conditions either by diluting environmental air with nitrogen or by reducing atmospheric pressure. 
ACKNOWLEDGEMENTS This work was supported by the Swiss National Science Foundation, Swiss Olympics and the University of Bern. The secretarial help of L. Gfeller-Tüscher is gratefully acknowledged.
http://jeb.biologists.org/content/204/18/3133.full
In moderate to high intensity work, the raised glucose output is maintained by an accelerated liver glycogenolysis, demonstrated by reduced liver glycogen. Mitochondria are the energy furnaces of the cells of the body, and it is essential for certain types of compounds, nutrients and minerals to be carried enzymatically in and out of the inner and outer membranes. According to John McLaren Howard of Acumen Laboratory, aldehydes from lipid peroxidation, when it occurs, tend to accumulate in the mitochondrial membranes. This increases the buffering capability and lactate removal from the FTa cells. The use of ventilatory thresholds, heart rate values, the Conconi test, lactate thresholds and so on has brought about much work with varying results, some good, some less so. Carnitine is manufactured intracellularly in the body from the essential amino acids L-lysine and L-methionine, by a process of methylation. The exercise-to-rest time ratio is less than 1:1. That is, a person who demonstrates greater aerobic fitness will have a higher level of circulatory insulin than a less aerobically trained athlete. Figure 2 presents the percentage contribution of each metabolic pathway during three different all-out cycle ergometer tests. Muscle oxygenation and ATP turnover are altered when blood flow is impaired. Crotonaldehyde is a known irritant. The body is not able to produce energy as efficiently. The authors reported a half-time of 21–22s for the fast recovery component, compared with a longer half-time for the slow component. Because the recovery is occurring in a rapidly changing environment, the assumption that the rate of PCr resynthesis is described by a monoexponential model is questionable. Athletes are able to perform high intensity training at lower velocities and thus produce less stress on the musculoskeletal system. The mechanisms that drive this response are not fully understood. The reversible phosphorylation of creatine makes energy rapidly available to muscle cells: when ATP is broken down, energy is required to rebuild or resynthesise it. Due to the environmental differences at high altitude, it may be necessary to decrease the intensity of workouts. There are a number of possible organic acids that may not be converting properly in the Krebs cycle, as can be seen above. Not all studies show a statistically significant increase in red blood cells from altitude training. Elevations in ADP have been reported to have an inhibitory effect relevant to the relationship between oxygen uptake kinetics and changes in the Cr/PCr ratio. Free radicals are produced inside the mitochondria as a byproduct of energy production and respiration. They experience incomplete recoveries in hypoxic conditions. Because it is impossible for the monoexponential curve to over- or undershoot the level observed at rest, an alternative double-exponential model for PCr resynthesis, originally proposed by Harris et al., has been considered. Athletes or individuals who wish to gain a competitive edge for endurance events can take advantage of exercising at high altitude. Study 1 used 31P measurements during stimulation with occlusion. This view was challenged by Bergstrom and developed further by Boobis et al.
Phosphocreatine resynthesis during recovery has been studied in different muscles of the exercising leg by PMRS (phosphorus magnetic resonance spectroscopy) (Scandinavian Journal of Medicine and Science in Sports, 23(5)). Phosphorylated guanidinoacetate partly compensates for the lack of phosphocreatine in skeletal muscle of mice lacking guanidinoacetate methyltransferase (Kan, Renema, Isbrandt and Heerschap). Altitude training is the practice by some endurance athletes of training for several weeks at high altitude, preferably well above 2000 metres above sea level, though more commonly at intermediate altitudes due to the shortage of suitable high-altitude locations. At intermediate altitudes, the air still contains approximately 21% oxygen, but the barometric pressure, and therefore the partial pressure of oxygen, is reduced. Phosphocreatine is the quickest means of regenerating ATP, by means of the enzyme creatine kinase; thus, the primary function of this system is to act as a temporal energy buffer. Nevertheless, over the years, several other functions have been attributed to phosphocreatine.
https://gejizijefiqepa.janettravellmd.com/is-phosphocreatine-resynthesis-inhibited-by-lack-of-oxygen-35096zz.html
Mitochondria turn food into energy for the body. But if they start to malfunction, free radicals can flood the cell, and a number of health problems might arise. Read about the symptoms of mitochondrial dysfunction and the diseases linked to it.
Mitochondrial Dysfunction & Associated Diseases
Recap: Why Mitochondria Are So Important
Properly functioning mitochondria are central to health, as they are the main energy provider of the cell. However, reactive oxidative species produced by mitochondria accumulate over time, and oxidative stress leads to age-related diseases. Due to the vast role of mitochondria in the cell, mitochondrial dysfunction is linked to hundreds of diseases [1, 2, 3]. Additionally, a number of metabolic disorders are associated with genetic mutations in either mitochondrial or nuclear DNA. These mutations may be inherited or occur randomly. Note that while mitochondrial dysfunction has been observed in or linked to these conditions, it is not necessarily the cause (or even a cause). As such, strategies intended to improve mitochondrial function may or may not help manage these diseases. When in doubt, your doctor can help you understand the role of the mitochondria in your health.
1) Cancer Research
Cancer cells require mitochondria to power the growth of tumors. Cancer cells tend to have an increased number of mitochondria to provide this energy. The mitochondrial increase is mediated independently by different transcription factors or proteins that drive the expression of specific genes. On the other hand, cancer cells increase the turnover of mitochondria that have accumulated free radicals. Oxidative stress is increased in cancer cells, which damages the surrounding tissue. One of the hallmarks of cancer is the ability of the cell to avoid programmed cell death (apoptosis). Normally, the mitochondria of healthy cells would trigger this process if the cell was replicating too much or too quickly. However, in cancer cells, programmed cell death is avoided by increasing the destruction of mitochondria that have accumulated free radicals. They also turn on antioxidant pathways so that oxidative stress does not trigger cell death [4, 1]. Mitochondria of cancer cells have lower levels of proteins that promote cell death (BAX/BAK) and/or higher levels of proteins that prevent cell death (BCL-2/BCL-XL) [4, 1]. The network of mitochondria in cancer cells is also different. Cancer cells have more fragmented mitochondria (produced by increasing mitochondrial division and decreasing fusion). Cancer cells also produce energy differently, through a process known as the Warburg effect. Energy production is largely done without oxygen (anaerobically), via glycolysis. This may be caused by turning down mitochondrial function to avoid apoptosis [4, 1]. Aerobic respiration through the Krebs cycle and oxidative phosphorylation still occurs in cancer cells, but to a lesser degree.
2) Neurodegenerative Diseases
Mitochondrial dysfunction is thought to be a major contributor to age-related neurodegenerative diseases like Alzheimer's, ALS, and Parkinson's disease. The accumulation of free radicals with age results in DNA and protein damage. Mitochondria accumulate defective proteins that cause loss of energy production and, ultimately, cell death. The mitochondria have been observed to behave differently (to be dysfunctional) in specific neurodegenerative diseases like Alzheimer's and Parkinson's.
Alzheimer's Disease
- Amyloid beta proteins build up around the outer mitochondrial membrane.
- This buildup decreases ATP production, increases oxidative stress, and ultimately causes cell death.
- Amyloid beta increases mitochondrial protein production.
- Mitochondrial enzymes have decreased activity, leading to reduced ATP levels.
- Mitochondria undergo structural changes within the cell. Rather than existing as long, fused tubes, the mitochondria are fragmented into small pieces (fission). This adds to the overall dysfunction of brain cells seen in patients with Alzheimer's disease.

Parkinson's Disease

- The hallmark of Parkinson's is the accumulation of alpha-synuclein protein, leading to cell death and loss of neurons.
- Patients with Parkinson's accumulate this protein in the mitochondria, leading to increased oxidative stress and reduced energy production.
- Parkin (an E3 ubiquitin ligase) and PINK1 are proteins responsible for marking damaged mitochondria for destruction. Patients with Parkinson's disease have low levels of these proteins.
- As a result, damaged, poorly functioning mitochondria are not degraded and remain in the cell.
- Alpha-synuclein continues to accumulate, leading to neurodegeneration.

3) Diabetes

Improper mitochondrial function has been seen in patients with both type 1 and type 2 diabetes. Aside from a lack of glucose available for respiration, the network and shape of mitochondria in the cells of diabetic patients may be abnormal [6, 1]. In diabetic patients, the mitochondria are broken up into small, fragmented networks (increased division, decreased fusion) throughout the cell. This has been observed in both type 1 and type 2 diabetes.

Type 2 diabetes is characterized by insulin resistance, which reduces the amount of glucose entering cells for respiration. This decreases ATP production and thus the energy available to the cell. It is not known whether mitochondrial dysfunction is a cause of insulin resistance or a symptom of it, though some researchers have suggested it could be a cause. Patients with type 2 diabetes also have reduced levels of the mitochondrial proteins responsible for energy production [6, 1].

4) Heart Failure

Heart cells rely heavily on mitochondria to power the pumping of the heart, and mitochondrial dysfunction is implicated in heart failure through the buildup of oxidative stress. Patients with heart failure show reduced mitochondrial activity, with lower activity of the electron transport chain. This is caused by a loss of oxygen supply to the mitochondria. When oxygen supply is reduced, electrons at the end of the electron transport chain cannot be picked up by oxygen; the electrons accumulate and generate free radicals.

5) Chronic Fatigue Syndrome

Chronic fatigue syndrome (CFS) is a controversial, lifelong disorder characterized by prolonged (over 6 months) intense fatigue that can reduce a person's ability to perform daily functions by more than 50%. Patients with CFS suffer from a variety of other symptoms as well.

Although it was once dismissed as a disease of the mind, increasing evidence points to mitochondrial dysfunction as one of the leading possible causes of this disorder. Multiple clinical trials have been conducted and have produced mixed results.
According to some researchers, CFS may be linked to one or more of the following mitochondrial abnormalities:

- Smaller mitochondrial size and number
- Lower L-carnitine, ALCAR, ubiquinone, or CoQ10 levels
- Reduced protein activity in the electron transport chain (oxidative phosphorylation)
- Reduced ATP production

However, a number of studies found no significant differences in mitochondrial structure or function between CFS patients and healthy controls. Further studies are required to fully understand the cause of this condition, the role of mitochondrial dysfunction, and how to treat patients effectively.

6) Genetic Disorders

Genetic mutations in mitochondrial genes can result in mitochondrial dysfunction through one or more of five distinct mechanisms:

- Inability to use other molecules (substrates) for energy production
- An improperly functioning Krebs cycle
- Defective energy production through the electron transport chain (oxidative phosphorylation)
- Defective transport of molecules into and out of the mitochondria
- Defective proteins in the electron transport chain

These defects can be caused by mutations in mitochondrial and/or nuclear DNA. Defects can also occur when mitochondrial DNA is unable to communicate with nuclear DNA.

Some metabolic diseases caused by mitochondrial and/or nuclear DNA mutations include [2, 1]:

- mtDNA depletion syndrome (MDS): a group of disorders, all involving dysfunctional mitochondrial DNA, that can produce a range of developmental, muscle, and brain abnormalities. Many known mitochondrial diseases are the result of MDS.
- Mitochondrial myopathy: a mitochondrial disease that causes muscle problems such as weakness, exercise intolerance, breathing difficulties, or issues with vision.
- Mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS): a mitochondrial disorder that affects the brain and muscles throughout the body. Stroke-like episodes and the buildup of lactic acid result in dementia, vomiting, extreme pain, and muscle weakness.
- CoQ10 deficiency: a deficiency in coenzyme Q10, a molecule that carries electrons in the electron transport chain.
- Mitochondrial neurogastrointestinal encephalomyopathy (MNGIE): a rare mitochondrial disease that primarily affects the brain and digestive system. The muscles and nerves of the digestive system fail to push food through properly.
- Mitochondrial diabetes: also called maternally inherited diabetes and deafness (MIDD), a subtype of diabetes caused by a single mutation in the mitochondrial DNA (at position 3243). The disease results in hearing loss and a form of diabetes similar to type 1.
- POLG mutations: POLG codes for DNA polymerase subunit gamma, the active (catalytic) part of the mitochondrial enzyme responsible for replicating mitochondrial DNA. Mutations in this gene lead to defective production of mitochondrial proteins and to many mitochondrial diseases.

The Mitochondrial Bottleneck Effect

Carrying mutations in mitochondrial DNA does not necessarily mean that you will transmit the disease. According to some researchers, the proportion of cells carrying mutated mitochondria may have to be significantly higher than the proportion carrying healthy mitochondria before the disease produces symptoms.

During the production of female egg cells (oocytes), each egg cell receives a random selection of mitochondrial DNA copies. Some of the copies may carry mutations, whereas others may be completely normal.
As the egg cell matures and prepares for fertilization, many mitochondria are replicated at random, which may dilute the chance of inheriting mutant mitochondria. Because of this phenomenon, a mother who carries highly mutated mitochondria will not necessarily pass the trait on to her offspring. If enough cells do carry mutant mitochondrial DNA – for example, more than 50% of the cells in the body – it is likely that the child will have the associated disorder.

Preventing Inheritance of Dysfunctional Mitochondrial DNA (mtDNA)

New technologies can prevent the transmission of mutated mitochondrial DNA from mother to offspring. One technique, called 3-parent in vitro fertilization, transfers the mother's nuclear DNA into a donor egg that carries healthy mitochondrial DNA (and whose own nuclear DNA has been removed). The resulting egg therefore carries the mother's genetic information but not her mutated mitochondrial DNA. The egg is then fertilized in vitro with the father's sperm and transferred into the mother's uterus, where it can implant.

The offspring will have all of the physical characteristics of their biological parents, because the nuclear DNA is unchanged. The only difference is that the child will have properly functioning mitochondria, unlike the mother, who carries mutated copies of mitochondrial DNA.

Possible Signs & Symptoms of Mitochondrial Dysfunction

Some researchers have identified possible signs that the mitochondria are not functioning as they should. These include:

- Feeling excessively tired
- Inability to exercise for long periods of time
- Shortness of breath, especially during exercise [14, 15]
- Poor bone growth and health
- Difficulty controlling movements, balance, and coordination (ataxia)
- Difficulty walking or talking
- Muscle weakness and pain
- Heart muscle disease (cardiomyopathy)
- Gut and digestive issues
- Liver and kidney disease
- Droopy eyelids, vision loss, and other eye problems
- Diabetes and other hormonal disorders [6, 24]
- Trouble hearing
- Migraines, strokes, and seizures [26, 27]
- Difficulty remembering things
- Developmental delay
- Autism
- Recurrent infections [30, 31]

Note that these symptoms may have many causes other than mitochondrial dysfunction. In fact, they are more likely to be associated with another diagnosable and treatable health problem, which your doctor can identify and address. If you are suffering from symptoms like these and they are not being addressed, we strongly recommend talking to a doctor to determine the best treatment or management plan for your health.

Takeaway

Mitochondria produce energy (ATP), recycle parts of cells that can be reused, and remove cells that are old and damaged beyond repair. But because mitochondria use oxygen, an excess of their byproducts can cause oxidative stress. When mitochondria break down, free radicals and cellular waste can flood cells and cause harm.

Mutations in mitochondrial genes can result in mitochondrial dysfunction and an array of metabolic diseases. Mitochondrial dysfunction is also implicated in neurodegenerative diseases like Alzheimer's disease and in chronic diseases like diabetes and heart failure. The symptoms of mitochondrial dysfunction vary greatly from person to person and may include fatigue, shortness of breath, coordination issues, and neurological problems, among others.
https://selfhacked.com/blog/mitochondrial-dysfunction-disease/
October 18, 2018

You are electric. Every millisecond, hundreds of thousands of tiny cellular constituents called mitochondria are pumping protons across a membrane to generate electric charges that are each equivalent, over a few nanometers, to the power of a bolt of lightning. And when you consider energy in general, your body, gram for gram, is generating 10,000 times more energy than the sun, even when you're sitting comfortably.

There's an average of 300 to 400 of these often ignored energy-producing cellular "organs" in every cell – roughly 10 million billion in your body. If you were to somehow pile them together and put them on a scale, these mitochondria would constitute roughly 10% of your bodyweight.

It's even more remarkable when you consider that they have their own DNA and reproduce independently. That's right, they're not even part of you. They're actually alien life forms, free-living bacteria that adapted to life inside larger cells some two billion years ago. But they're not parasitic by any means. Biologically speaking, they're symbionts, and in their absence you could hardly move a muscle or undergo any of thousands of biological functions.

In a broad sense, mitochondria have shaped human existence. Not only do they play a huge role in energy production, sex, and fertility, but also in aging and death. If you could somehow influence them, you could theoretically double your lifespan without any of the diseases typically associated with old age. You could avoid metabolic diseases like syndrome X that afflict some 47 million Americans and simultaneously retain the energy of youth well into codger-dom. From an athletic perspective, controlling the vitality and number of mitochondria in your muscle cells could lead to huge improvements in strength endurance that didn't decline with the passing of years.

Luckily, I'm not just teasing you with things that might someday happen. Controlling mitochondria is within our grasp, right now. But before we discuss how they affect muscle strength and endurance, we need to look at some really mind-blowing stuff that will be the crux of tons of scientific research and innovation in the years to come.

Mitochondria are tiny organelles, which, as you can tell by the word, are kind of like teeny-tiny organs, and like organs, they each have specific functions, in this case the production of energy in the form of ATP, the energy currency of the cell. They do this by metabolizing sugars, fats, and other chemicals with the assistance of oxygen. (Every time you take creatine, you're in a sense "feeding" your mitochondria. Creatine is transported directly into the cell, where it's combined with a phosphate group to form phosphocreatine, which is stored for later use. When energy is required, the phosphocreatine molecule lets go of its phosphate group, which combines with an ADP molecule to form ATP.)

A cell can have one lonely mitochondrion or as many as hundreds of thousands, depending on its energy needs. Metabolically active cells like liver, kidney, heart, brain, and muscle have so many that mitochondria may make up 40% of the cell, whereas other slacker cells like blood and skin have very few. Even sperm cells have mitochondria, but they're all packed into the midpiece of the tail. As soon as the sperm cell hits its target, the egg cell, the tail plunks off into the deep ocean of prostatic fluid. That means that only the mother's mitochondria are passed on to offspring.
This is done with such unfailing precision that we can track mitochondrial genes back almost 190,000 years to one woman in Africa who's been affectionately named "Mitochondrial Eve." Biologists have even postulated that this particular phenomenon is the reason why there are two sexes instead of just one. One sex must specialize to pass on mitochondria in the egg whereas the other must specialize in not passing them on. The commonly held assumption of aging is that as the years go by, we get more and more rickety until, finally, some part or parts break down beyond repair and we up and die. The popular reasons include wear and tear or the unraveling of telomeres – those nucleotide sequences at the end of genes that are said to determine how many times a cell can replicate. In the case of generic wear and tear, it doesn't seem to bear up to scrutiny because different species accumulate wear and tear at different rates, and as far as telomere theory, their degradation among different species displays just too much divergence to pass the smell test. Others say it's because of a drop in GH or a decline in the abilities of the immune system, but why the heck do they drop in the first place? What we need to do is look at those individuals or species that don't seem to suffer from the normal signs of aging. The oldest among us, those rare centenarians that appear on morning talk shows every so often boasting about eating bacon and liquoring it up every day, seem to be less prone to degenerative disease than the rest of us. They end up dying from muscle wastage rather than any specific illness. Similarly, birds rarely suffer from any degenerative diseases as they age. More often, they fly around as they always have until one day their power of flight fails and they crash land ignominiously into a drainage ditch. The answer to both the centenarians' and the birds' long, disease-free life seems to lie with the mitochondria. In both cases, their mitochondria leak fewer free radicals. This is important because mitochondria often determine whether a cell lives or dies, and this is dependent on the location of a single molecule – cytochrome C. Any one of a number of factors, including UV radiation, toxins, heat, cold, infections, or pollutants can compel a cell to commit suicide, or apoptosis, but the unrestricted flow of free radicals is what we're concerned with here. The underlying principle is this: depolarization of the mitochondrial inner membrane – through some sort of stress, either external or internal – causes free radicals to be generated. These free radicals release cytochrome C into the cellular fluid, which sets into motion a cascade of enzymes that slice up and dispose of the cell. This observation led to the popular theory of mitochondrial aging that surfaced in 1972. Dr. Denham Harman, the "father" of free radicals, observed that mitochondria are the main source of free radicals and that they're destructive and attack various components of the cell. If enough cells commit apoptosis enough times, it's like a butcher slicing up a pound of salami. The liver, the kidneys, the brain, immune system cells, even the heart, lose mass and effectiveness slice by slice. Hence the diseases of aging. Dr. Harman is why practically every food on the market today boasts about its antioxidant power. The trouble is, Dr. Harman appears to have been wrong, at least partially. For one thing, it's hard to target the mitochondria with antioxidant foods. 
It might be the wrong dosage, the wrong timing, or even the wrong antioxidant. Moreover, it seems that if you completely turn off free radical leakage in the mitochondria, the cell commits suicide. Hardly the effect we're looking for. (That's not to say that ingesting antioxidants isn't good for you, but it's important to realize that this endless, single-minded pursuit of higher and higher antioxidant-containing foods might not do much to prolong life.) Free radicals, it seems, in addition to telling the cell when to commit suicide, also fine tune respiration, otherwise known as the production of ATP. They're involved in a sensitive feedback loop, telling the mitochondria to make compensatory changes in performance. However, if you completely shut off or slow down free radical production too much through external methods like an antioxidant diet or drugs, the membrane potential of the mitochondria collapses and it spills apoptotic proteins into the cell. If a larger number of mitochondria do this, the cell dies. If a large number of cells do this, the organ and overall health of the individual is affected. In the case of controlling free radicals, it seems you're damned if you do and damned if you don't. So again, we need to look at old codgers and the birds. It so happens there's a gene in certain Japanese men who are well over a hundred years old that leads to a tiny reduction in free radical leakage. If you have this gene, you're 50% more likely to live to be a hundred. You're also half as likely to end up in a hospital for any reason. As far as birds, they've got two things going for them. One, they disassociate their electron flow from ATP production, a process known as uncoupling. This, in effect, restricts leakage of free radicals. Secondly, birds have more mitochondria in their cells. Since they have more, it leads to a greater spare capacity at rest, and thus lowers the reduction rate and free radical release is lowered. So we're left with this: increasing mitochondrial density, along with slowing free radical leakage, would likely lead to a longer life, free from most of the diseases typically attributed to old age. Since mitochondria have their own genes, they're subject to mutations that affect their health and function. Acquire enough of these mutations, and you affect the way the cell functions. Affect enough cells, and you affect the organ/system they're a part of. The hardest hit organs are those that are generally mitochondria-rich, like muscles, the brain, liver, and kidneys. Specific mitochondria-associated diseases range from Parkinson's, Alzheimer's, diabetes, various vaguely diagnosed muscle weakness disorders, and even Syndrome X. Take a look at heart patients, for instance. Generally, they have about a 40% decrease in mitochondrial DNA. And, as evidence that mitochondrial deficiency might be passed down from generation to generation, the insulin-resistant children of Type II diabetics, despite being young and still lean, had 38% fewer mitochondria in their muscle cells. Mitochondria dysfunction has even been shown to predict prostate cancer progression in patients who were treated with surgery. Some of these mitochondrial diseases might not become apparent until the person with the funky mitochondria reaches a certain age. A youthful muscle cell, for example, has a large population (approximately 85%) of mitochondria that are mutation free and it can handle all of the energy demands placed on it. 
However, as the number of mitochondria declines with age, the energy demands placed on the remaining mitochondria rise. It ultimately reaches a point where the mitochondria can't produce enough energy, and the affected organ or organs start to display diminished capacity. Clearly, mitochondria play a pivotal role in the genesis of a host of maladies, and maintaining a large population of normal, healthy mitochondria could well eliminate many of them.

You can intuit that muscle cells have a lot of mitochondria, and furthermore, you can easily see that the more you have, the better your performance capacity. The more mitochondria, the more energy you can generate during exercise. As an example, pigeons and mallards, both species known for their endurance, have lots and lots of mitochondria in their breast tissue. In contrast, chickens, which don't fly much at all, have very few mitochondria in their breast tissue. However, if you were to decide to train a chicken for a fowl version of a marathon, you could easily increase the number of mitochondria he had, but only to a point, since the number is also governed by species-dependent genetics.

Luckily, you can also increase the number of mitochondria in humans. Chronic exercise can increase mitochondrial density, and apparently, the more vigorous the exercise, the more mitochondria formed. In fact, if you know any delusional runners who tally upwards of 50 miles a week, tell them that 10 to 15 minutes of running at a brisk 5K pace could do much more for their ultimate energy production and efficiency than an upturn in total mileage. Short-duration, high-intensity running will increase mitochondrial density to a much greater degree than long-distance running, which, kind of ironically, will lead to better times in their long-distance races.

Weight training also increases mitochondrial density. Type I muscle fibers, often referred to as slow-twitch or endurance fibers, have lots of mitochondria, whereas the various types of fast-twitch fibers – Type IIa, Type IIx, and Type IIb – are each progressively less rich in mitochondria. And while it's true that heavy resistance training converts slow-twitch fibers to fast-twitch fibers, the relative number and efficiency of the mitochondria in each type needs to be kept at peak levels, lest the lifter start to experience a loss in muscle quality.

This is what happens as lifters age. An aging human may be able to retain most or even all of his muscle mass through smart training, but loss of mitochondrial efficiency might lead to a loss of strength. One supportive study of aging males showed that muscle strength declined three times faster than muscle mass. Clearly, maintaining mitochondrial efficiency while also maintaining or increasing their population would pay big dividends in strength and performance, regardless of age.

Luckily, there are a lot of ways you can improve mitochondrial health and efficiency, and there are even a couple of ways you can make more of them. Since the main problem in the age-related decline of mitochondrial health seems to be free-radical leakage, we need to figure out how to slow this leakage over a lifetime. We could probably do this by genetic modification (GM), but given the public's horrific fear of genetic modification of any kind, the idea of inserting new genes into our make-up will have to be put aside for a while. The least controversial way seems to be through plain old aerobic exercise.
Exercise speeds up the rate of electron flow, which makes the mitochondria less reactive, thus lowering (or so it seems) the speed of free radical leakage. Likewise, aerobic exercise, by increasing the number of mitochondria, again reduces the speed of free radical leakage. The more there are, the greater the spare capacity at rest, which lowers the reduction rate and lessens the production of free radicals, hence longer life.

The birds give us more clues. They "uncouple" their respiratory chains, which means they disassociate electron flow from the production of ATP; the energy of respiration then dissipates as heat. By allowing a constant electron flow down the respiratory chain, free radical leakage is restricted. It turns out there are a few compounds that, when ingested by humans, do the same thing. One is the notorious bug killer/weight loss drug known as DNP. Bodybuilders were big fans of this drug as it worked well in shredding fat, and users were easy to spot as they sported a sheen of sweat even when sitting in a meat locker. The trouble is, DNP is toxic. The party drug ecstasy works well, too, as an uncoupling agent. However, aside from causing severe dehydration and making mitochondria listen to techno music while having uninhibited sex, the drug poses all kinds of ethical/sociological problems that make its use problematic. Aspirin is also a mild respiratory uncoupler, which might help explain some of its weird beneficial effects.

Another way we might be able to increase the number of mitochondria (which seemingly has the added benefit of resulting in less free radical leakage) is through the use of dietary compounds like pyrroloquinoline quinone (PQQ), a supposed component of interstellar dust. While PQQ isn't currently viewed as a vitamin, its involvement in cellular signaling pathways – especially those having to do with mitochondrial biogenesis – might eventually cause it to be regarded as essential to life. Taking PQQ has been shown to increase the number of mitochondria, which is exciting as hell. Other compounds that seem like they might work the same way are the diabetic drug Metformin and perhaps, since it shares some of the same metabolic effects as Metformin, cyanidin 3-glucoside (Indigo-3G®). Indeed, cyanidin 3-glucoside has been shown in lab experiments to be highly beneficial in preventing or fixing mitochondrial dysfunction.

Aside from increasing the number of mitochondria, there are also a number of other dietary strategies that can enhance mitochondrial function or increase their number. These "fixes" are a lot to swallow... literally. After thinking about it a lot, I've taken up a strategy that's based on pragmatism and the idea of potentially overlapping supplements. In other words, I take many of these things, but almost everything I take has applications other than the care and feeding of my mitochondria. And if they have the added benefit of increasing mitochondrial life or efficiency, I'm sitting pretty. Lastly, I augment my lifting with a healthy dose of aerobic or semi-aerobic activity.

Will feeding and nurturing your mitochondria really build muscle, end disease, and allow you to live forever? To be as precise as current science allows me to be, the answers are probably, kinda', and sort of.
Increased mitochondrial efficiency and density would make your muscles more capable of generating power for longer amounts of time, which is pretty much a surefire recipe for more muscle, provided you're a decent muscle chef. Since many of the diseases that plague us can directly or indirectly be tied to mitochondrial function, there's a good chance that aiding and abetting them could eliminate or ameliorate many of them. And lastly, a slight, long-time reduction in free radical leakage seems like it could, theoretically, increase human lifespan by about 10 to 20%. Is it worth the trouble, given that we're operating on at least a few hunches? That's of course your call, but the story is too compelling and too potentially rewarding to ignore.
https://www.biotest.co.uk/blogs/articles/grow-muscle-end-disease-live-longer
Preamble. A few small alarm bells, dizziness, muscle cramps and many small aches the day after an excursion in the mountains made me realize that my body is no longer that of a twenty-year-old and needs more care and attention. I'm certainly not saying I feel like Methuselah, but one thing is sure: half a century of life takes its toll on the joints, the muscles and the cardiovascular system. So, to keep doing my favorite activity in the mountains and enjoying myself like a madman, I would rather understand my limits better, so as not to overstep them and get a nasty surprise.

Bioenergetic and biomechanical considerations. That the force developed by a muscle is proportional to its cross-sectional area is common knowledge, and it informs much of the preparatory training for strength sports such as weight lifting. As a rule of thumb, a muscle develops on average about 50 N per cm² of cross-section (a rough numerical illustration follows at the end of this passage). Maximum muscle strength plays a dominant role in progression on the wall in free climbing, more so than in traditional mountaineering.

In this context it is worth considering how muscle tissue behaves depending on the type of fibers that make it up. Muscle fibers fall into two main types: slow fibers, with a mostly oxidative metabolism (type I), and fast fibers, with a mostly anaerobic metabolism (type II). Slow, oxidative fibers make up the majority of the motor units of the antigravity muscles, those whose job is to support the weight of the body and hold it upright. These are the muscles the climber relies on most to hold a position on the wall during the climb. Located in the back and the lower limbs, they are very resistant to fatigue and allow a position to be maintained for long periods. The arm muscles, by contrast, are composed mostly of fast-type motor units. The force they can develop is great, but the rapid consumption of energy substrates by their costly anaerobic metabolism makes them poorly resistant to fatigue. It is therefore not possible to stay on a wall for long with the body literally hanging from the arms.

The inverse relationship between force and speed of muscle contraction matters during the fast movements needed to get past critical, sometimes very exposed passages. Here a preliminary countermovement, a quick crouch, can help store energy in the elastic elements of the muscle tissue, energy which is then returned during the rebound, much as a jumper loads up before taking off.

Muscles are attached to the bone segments of the skeleton in such a way that their resting length is normally optimal for developing force. At this length the myofibrils allow the maximum possible interaction between the myosin heads and the reactive sites on actin, the two structures that form the force generator underlying muscle contraction. It follows that the climber should avoid holds that force an excessive stretch. Muscles can still be "trained" to work at longer lengths through stretching (elongation) exercises. Finally, the climber should avoid positions or efforts held in isometric mode, that is, pushing against an immovable resistance.
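As a back-of-the-envelope illustration of the 50 N/cm² figure quoted above (the 20 cm² cross-section is a made-up example, not a value from the text):

$$F \approx \sigma A = 50\ \mathrm{N/cm^2} \times 20\ \mathrm{cm^2} = 1000\ \mathrm{N} \approx 100\ \mathrm{kgf}$$

So a muscle group with roughly 20 cm² of cross-section could, in principle, develop on the order of a thousand newtons, which is why cross-sectional area is a reasonable first proxy for maximal strength.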
In such isometric work, the prolonged, tetanus-like contraction is a disadvantage from the start and, if held, cuts off the oxygen supply that the circulation should deliver to the working muscle: the feeding blood vessels are squeezed shut by the contracted muscle around them. In other words, this way of working quickly drives the muscle into ischemia, that is, into anaerobic metabolic conditions.

Muscles draw the energy required for contraction from the splitting of adenosine triphosphate (ATP), which must be renewed continuously. This renewal relies on phosphocreatine over very short times (less than about 10 s), on anaerobic glycolysis over somewhat longer times (about 40 s), with lactic acid as a by-product, and, for very long durations, on the complete oxidation of carbohydrates and lipids. The first two mechanisms are limited in time, since they quickly deplete the muscle's energy reserves and, in the case of anaerobic glycolysis, acidify the muscle tissue. This puts a time limit on short, intense efforts, a limit that does not exist for the oxidative mechanism.

The net mechanical work done by the climber is proportional to the product of the weight lifted and the elevation gained. The ratio of mechanical work to energy consumed, expressed in the same units, is the efficiency of the climb; in accordance with the laws of thermodynamics it is less than 1, part of the energy being lost as heat. Climbing takes place on uneven terrain, in a discontinuous fashion, and involves disparate muscle groups (the arms in particular) that are often used statically, which brings the efficiency down to 0.08-0.1 (8-10%), well below the optimal efficiency of muscle contraction (0.25-0.30). A worked example follows at the end of this passage.

Myoglobin. The concentration of myoglobin is very high in aerobic, slow-contracting muscles, which can sustain very high oxygen consumption; because of the high myoglobin content they are dark in color. Conversely, anaerobic or fast-contracting muscles have a much lower myoglobin content and tend to be pale. Slow-twitch muscles also receive an abundant blood supply, which makes them darker still compared with fast-twitch ones.

Lactic acid. Lactate is produced even at low exercise intensities; red blood cells, for example, form it continuously even at complete rest. A normally active adult male produces about 120 grams of lactic acid per day; 40 g of this comes from tissues with an exclusively anaerobic metabolism (the retina and red blood cells), the remainder from other tissues (mainly muscle) depending on the oxygen actually available. The body has defense systems against lactic acid and can convert it back into glucose through the activity of the liver, while the heart is able to metabolize lactic acid directly to produce energy. So lactic acid, although toxic in excess, is not really a waste product: thanks to a series of enzymatic processes it can be used for the resynthesis of intracellular glucose. Recent work also points out that lactic acid is only indirectly responsible for the increase in blood acidity; the main culprit is the hydrogen ion (H+), which is released in large quantities during high-intensity exercise as ATP hydrolysis increases.
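A minimal sketch of the work-and-efficiency arithmetic described above. The body mass, elevation gain and 10% gross efficiency used here are illustrative assumptions, not values taken from the article:

```python
# Rough energy cost of a climb at the ~10% gross efficiency quoted above.
G = 9.81  # gravitational acceleration, m/s^2

def climb_energy(mass_kg: float, elevation_gain_m: float, efficiency: float = 0.10):
    """Return (useful mechanical work, metabolic energy) in kilojoules."""
    work_kj = mass_kg * G * elevation_gain_m / 1000.0  # W = m * g * h
    metabolic_kj = work_kj / efficiency                # energy actually spent
    return work_kj, metabolic_kj

work, spent = climb_energy(mass_kg=75, elevation_gain_m=500)
print(f"useful work:    {work:.0f} kJ")                               # ~368 kJ
print(f"metabolic cost: {spent:.0f} kJ (~{spent / 4.184:.0f} kcal)")  # ~3679 kJ, ~880 kcal
```

At the 25-30% efficiency of isolated muscle the same climb would cost roughly a third as much, which is the point above about how wasteful static, arm-heavy climbing is.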
At the same exercise intensity, the amount of lactic acid produced is inversely proportional to the subject's level of training. If an athlete and a sedentary person run at the same speed, the sedentary one produces far more lactic acid than the athlete and has more trouble disposing of it.

During strenuous muscle work, when aerobic metabolism can no longer meet the rising energy requirement, an accessory pathway for ATP production is activated: the anaerobic lactic (lactacid) mechanism. This partly compensates for the lack of oxygen but raises lactic acid production beyond the body's capacity to neutralize it. The result is an abrupt rise in blood lactate, which roughly corresponds to the heart rate at the subject's anaerobic threshold.

Blood lactate concentration is usually 1-2 mmol/L at rest, but during strenuous exercise it can reach and exceed 20 mmol/L. The anaerobic threshold, as measured by blood lactic acid, is conventionally taken as the heart rate at which, during an incremental exercise test, the concentration reaches 4 mmol/L. Lactic acid begins to accumulate in the muscles and the blood when the rate of production exceeds the rate of disposal. Roughly speaking, this happens when, during intense exercise, heart rate exceeds about 80% of maximum for the untrained and about 90% for the well trained.

Increasing tolerance to lactic acid. Athletes in anaerobic lactacid disciplines (efforts lasting between 30 and 200 seconds) are forced to compete under conditions of maximal lactate production and accumulation. Their performance therefore depends on the efficiency of anaerobic lactate metabolism and of the disposal systems in the blood, liver and muscle. Training aimed at improving these characteristics seeks to saturate the muscles with lactic acid so that they become used to working in strongly acidic conditions; at the same time it improves the effectiveness of the blood buffer systems (bicarbonate) at neutralizing acidosis. The athlete has two training techniques available: one based on continuous effort (20-25 minutes) at heart rates close to the anaerobic threshold (within about 2%), and one based on interval work; in track athletics, for example, 2-6 repetitions of 150-400 meters at race pace or faster, for 1-4 sets, with partial recovery between repetitions (45-90 seconds) and complete recovery between sets (5-10 minutes).

Lactic acid is cleared within 2 or 3 hours, and its quantity is halved every 15-30 minutes depending on training status and on how much was produced (a simple decay sketch follows at the end of this passage). Contrary to what is often claimed, lactic acid is not responsible for the muscle soreness felt the day after a very intense workout. That pain comes from micro-injuries in the muscle that set off inflammatory processes, with increased blood and lymph activity that heightens sensitivity in the most heavily worked areas.

Lactic acid is a strong stimulus for the secretion of anabolic hormones such as GH and testosterone. This is why high-intensity weight training, punctuated by rest periods that are not too long, maximizes the gain of muscle mass. Of the lactic acid produced, about 65% is converted into carbon dioxide and water, 20% into glycogen, 10% into protein and 5% into glucose.
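A simple exponential-clearance sketch for post-exercise blood lactate. The starting value of 12 mmol/L and the 20-minute half-life are illustrative assumptions; the article only gives a 15-30 minute range for the half-life:

```python
# Post-exercise blood lactate, modeled as simple exponential decay.
def lactate(initial_mmol_l: float, half_life_min: float, t_min: float) -> float:
    """Blood lactate (mmol/L) after t_min minutes of recovery."""
    return initial_mmol_l * 0.5 ** (t_min / half_life_min)

for t in (0, 30, 60, 120, 180):
    print(f"{t:3d} min: {lactate(12, 20, t):5.2f} mmol/L")
# With these assumptions lactate is back in the 1-2 mmol/L resting range within
# about an hour; a slower 30-minute half-life and a higher peak stretch this
# toward the 2-3 hours quoted above.
```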
Training at high altitude. One consequence of altitude hypoxia is the formation of new capillaries in the muscles, which improves oxygen delivery mainly by shortening the diffusion distances. There is also an increase in the enzymes responsible for the reactions of aerobic metabolism, with the result of increasing oxygen uptake. Finally, high-altitude hypoxia stimulates the production of red blood cells, increasing their number and enriching the blood with hemoglobin within a few weeks. This raises the oxygen-carrying capacity, but it also makes the blood denser, which in turn lowers the heart rate. The natives of lands at high altitude sometimes develop a disease (Monge's disease) in which red blood cells come to make up over 75% of the total blood volume (the norm is 45%), so that the viscosity of the blood roughly doubles.

Several weeks of training at high altitude can make athletes who live at low altitude competitive against athletes from high mountain regions. However, it is not fully proven that a period of training at altitude is an advantage for an athlete who must then compete at sea level. This is probably because both oxygen consumption and the amount of work that can be done are lower at altitude than normal. Since a high training load cannot be scheduled at altitude, the stay may produce a slight "detraining" effect that cancels out any increase in the ability to carry oxygen. Another explanation might be that altitude training reduces the buffering effect of bicarbonate in the blood, which could leave athletes less able to counteract the high concentrations of lactic acid produced during competition. However, the normal bicarbonate content of the blood and its buffering effect are usually restored within 24 hours, so this explanation does not seem to hold. Finally, endurance capacity is often limited by two concurrent factors, reduced blood volume and increased blood viscosity, when the body becomes dehydrated through heavy sweating. Since altitude training raises the concentration of red blood cells, the blood of athletes who train at altitude tends to become more viscous, and increasingly so as sweating increases.

Calculation of the anaerobic threshold. Calculating the approximate heart rate corresponding to the anaerobic threshold is quick and simple: subtract your age from 220 and multiply the result by 0.935. For example, a 40-year-old has a maximum heart rate of 220 - 40 = 180 bpm (beats per minute), and the heart rate at the anaerobic threshold is 180 x 0.935 ≈ 168 bpm (a small calculator sketch follows after this passage). This calculation holds for a trained person, in whom the buffer systems and general adaptation ensure efficient removal of the lactic acid produced; for a sedentary person the threshold heart rate may be much lower, around 70% of HRmax.

Conclusions. For a little extra weight in the pack, you get the chance to gather important biometric data about your body. When I reach the summit of a mountain in the Western Alps, two objects in my backpack always arouse interest: my flask of brandy and my heart rate monitor (in that order). Monitoring heart rate in the high mountains is a useful way to find your aerobic threshold and not exceed it.
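A small sketch of the threshold arithmetic described above. The 220-minus-age formula and the 0.935 factor are the article's rule of thumb for trained people, and the 70% figure for sedentary people is also taken from the text:

```python
# Rule-of-thumb anaerobic-threshold estimate from the article.
def threshold_hr(age: int, trained: bool = True) -> float:
    """Approximate anaerobic-threshold heart rate in beats per minute."""
    hr_max = 220 - age                    # estimated maximum heart rate
    factor = 0.935 if trained else 0.70   # sedentary people sit much lower
    return hr_max * factor

print(threshold_hr(40))                 # 168.3 bpm, the article's worked example
print(threshold_hr(40, trained=False))  # 126.0 bpm for a sedentary 40-year-old
```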
As noted above, this works out to roughly 80-85% of your maximum heart rate. The heart rate monitors available today are sufficiently precise, small and light (the bulk is really minimal). A useful way to learn about your state of hydration is the hematocrit: the higher it climbs, the denser your blood. Small devices exist on the market that can, with a good degree of reliability, provide an impressive amount of data on your metabolism. As for replacing the minerals lost through sweating, I personally prefer not to use supplements; at most, in hot weather, when the whitish halo forms at the edges of dried sweat, I slowly suck on an Enervit tablet. What seems clear is that our body needs its compensatory pauses; its mechanisms work fine as long as you do not abuse them. It is equally essential never to neglect water: the usual rule applies that if you only drink when you are thirsty, it is already too late, and you are already dehydrated.
https://www.mbpost.org/font-color-darkred-anaerobic-font-font-color-red-effort-font/285475
Mitochondria are often referred to as the powerhouses of the cell. They help turn the energy we take from food into energy that the cell can use. But there is more to mitochondria than energy production.

Present in nearly all types of human cell, mitochondria are vital to our survival. They generate the majority of our adenosine triphosphate (ATP), the energy currency of the cell. Mitochondria are also involved in other tasks, such as signaling between cells and cell death, otherwise known as apoptosis. In this article, we will look at how mitochondria work, what they look like, and explain what happens when they stop doing their job correctly.

Mitochondria are small, often between 0.75 and 3 micrometers across, and are not visible under the microscope unless they are stained. Unlike most other organelles (miniature organs within the cell), they have two membranes, an outer one and an inner one, and each membrane has different functions. Mitochondria are split into different compartments or regions, each of which carries out distinct roles. Some of the major regions include:

Outer membrane: Small molecules can pass freely through the outer membrane. This outer portion includes proteins called porins, which form channels that allow molecules to cross. The outer membrane also hosts a number of enzymes with a wide variety of functions.

Intermembrane space: This is the area between the inner and outer membranes.

Inner membrane: This membrane holds proteins that have several roles. Because there are no porins in the inner membrane, it is impermeable to most molecules; molecules can only cross the inner membrane via special membrane transporters. The inner membrane is where most ATP is created.

Cristae: These are the folds of the inner membrane. They increase the surface area of the membrane, therefore increasing the space available for chemical reactions.

Matrix: This is the space within the inner membrane. Containing hundreds of enzymes, it is important in the production of ATP. Mitochondrial DNA is housed here (see below).

Different cell types have different numbers of mitochondria. For instance, mature red blood cells have none at all, whereas liver cells can have more than 2,000. Cells with a high demand for energy tend to have greater numbers of mitochondria; around 40 percent of the cytoplasm in heart muscle cells is taken up by them.

Although mitochondria are often drawn as oval-shaped organelles, they are constantly dividing (fission) and bonding together (fusion), so in reality these organelles are linked together in ever-changing networks. Also, in sperm cells, the mitochondria are spiraled in the midpiece and provide energy for tail motion.

Although most of our DNA is kept in the nucleus of each cell, mitochondria have their own set of DNA. Interestingly, mitochondrial DNA (mtDNA) is more similar to bacterial DNA. The mtDNA holds the instructions for 37 genes: 13 proteins that form part of the energy-production machinery, plus the transfer and ribosomal RNAs needed to build them. The human genome stored in the nuclei of our cells contains around 3.3 billion base pairs, whereas mtDNA consists of only around 16,500.

During reproduction, half of a child's DNA comes from their father and half from their mother. However, the child always receives their mtDNA from their mother. Because of this, mtDNA has proven very useful for tracing genetic lines. For instance, mtDNA analyses have concluded that humans may have originated in Africa relatively recently, around 200,000 years ago, descended from a common ancestor known as "Mitochondrial Eve."

Although the best-known role of mitochondria is energy production, they carry out other important tasks as well.
In fact, only about 3 percent of the genes needed to make a mitochondrion go into its energy production equipment. The vast majority are involved in other jobs that are specific to the cell type where they are found. Below, we cover a few of the roles of the mitochondria:

Producing energy

ATP, a complex organic chemical found in all forms of life, is often referred to as the molecular unit of currency because it powers metabolic processes. Most ATP is produced in mitochondria through a series of reactions known as the citric acid cycle, or Krebs cycle. Energy production mostly takes place on the folds, or cristae, of the inner membrane.

Mitochondria convert chemical energy from the food we eat into an energy form that the cell can use. This process is called oxidative phosphorylation. The Krebs cycle produces a chemical called NADH, which is used by enzymes embedded in the cristae to produce ATP. In molecules of ATP, energy is stored in the form of chemical bonds; when these chemical bonds are broken, the energy can be used.

Cell death

Cell death, also called apoptosis, is an essential part of life. As cells become old or broken, they are cleared away and destroyed, and mitochondria help decide which cells are destroyed. Mitochondria release cytochrome C, which activates caspase, one of the chief enzymes involved in destroying cells during apoptosis. Because certain diseases, such as cancer, involve a breakdown in normal apoptosis, mitochondria are thought to play a role in the disease.

Storing calcium

Calcium is vital for a number of cellular processes. For instance, releasing calcium back into a cell can initiate the release of a neurotransmitter from a nerve cell or hormones from endocrine cells. Calcium is also necessary for muscle function, fertilization, and blood clotting, among other things. Because calcium is so critical, the cell regulates it tightly. Mitochondria play a part in this by quickly absorbing calcium ions and holding them until they are needed. Other roles for calcium in the cell include regulating cellular metabolism and steroid synthesis.

Heat production

When we are cold, we shiver to keep warm. But the body can also generate heat in other ways, one of which is by using a tissue called brown fat. During a process called non-shivering thermogenesis, the mitochondria in brown fat release energy as heat rather than capturing it as ATP. Brown fat is most abundant in babies and gradually declines as we age.

The DNA within mitochondria is more susceptible to damage than the rest of the genome. This is because free radicals, which can cause damage to DNA, are produced during ATP synthesis. Also, mitochondria lack the same protective mechanisms found in the nucleus of the cell. However, the majority of mitochondrial diseases are due to mutations in nuclear DNA that affect products that end up in the mitochondria. These mutations can either be inherited or spontaneous.

When mitochondria stop functioning, the cell they are in is starved of energy, so, depending on the type of cell, symptoms can vary widely. As a general rule, cells that need the largest amounts of energy, such as heart muscle cells and nerves, are affected the most by faulty mitochondria.

The following passage comes from the United Mitochondrial Disease Foundation: "Because mitochondria perform so many different functions in different tissues, there are literally hundreds of different mitochondrial diseases.
[…] Because of the complex interplay between the hundreds of genes and cells that must cooperate to keep our metabolic machinery running smoothly, it is a hallmark of mitochondrial diseases that identical mtDNA mutations may not produce identical diseases."

Diseases that generate different symptoms but are due to the same mutation are referred to as genocopies. Conversely, diseases that have the same symptoms but are caused by mutations in different genes are called phenocopies. An example of a phenocopy is Leigh syndrome, which can be caused by several different mutations.

Although the symptoms of a mitochondrial disease vary greatly, they might include:

- loss of muscle coordination and weakness
- problems with vision or hearing
- learning disabilities
- heart, liver, or kidney disease
- gastrointestinal problems
- neurological problems, including dementia

A number of other conditions are also thought to involve some level of mitochondrial dysfunction.

Over recent years, researchers have investigated a link between mitochondrial dysfunction and aging. There are a number of theories surrounding aging, and the mitochondrial free radical theory of aging has become popular over the last decade or so. The theory is that reactive oxygen species (ROS) are produced in mitochondria as a byproduct of energy production. These highly charged particles damage DNA, fats, and proteins, including the functional parts of the mitochondria themselves. When the mitochondria can no longer function so well, more ROS are produced, worsening the damage further. Although correlations between mitochondrial activity and aging have been found, not all scientists have reached the same conclusions, and the exact role of mitochondria in the aging process is still unknown.

In a nutshell

Mitochondria are, quite possibly, the best-known organelle. And although they are popularly referred to as the powerhouse of the cell, they carry out a wide range of actions that are much less well known. From calcium storage to heat generation, mitochondria are hugely important to our cells' everyday functions.
https://www.medicalnewstoday.com/articles/320875
Distributed within the cell and in the extracellular matrix, calcium acts as a signaling initiator for a variety of processes, including muscle contraction, hormone and neurotransmitter secretion, and fertilization. Calcium has been shown to be the fifth most abundant element in the human body by weight. Like the more abundant elements (carbon, hydrogen, oxygen, and nitrogen), it affects many aspects of cellular life and is essential for life and development. The following sections focus on the role of calcium signaling in muscle contraction, mitochondrial function, gene transcription, and reproduction.

Generally, muscle contraction involves the tightening, shortening, and lengthening of muscles. This cycle is essential for a number of activities, including movement, joint stability, maintenance of body temperature, and posture. Calcium is specifically required for the contraction of actomyosin fibers.

Each muscle consists of bundles of 10 to 100 muscle fibers, known as fasciculi, wrapped in a connective tissue covering called the perimysium; each individual fiber is in turn covered by a connective tissue layer known as the endomysium. The fibers themselves are made up of rod-like organelles known as myofibrils (or muscle fibrils), which consist of myosin and actin, the two proteins whose interaction produces contraction.

* Actin filaments and the thicker myosin filaments (characterized by their numerous heads) are arranged into functional units known as sarcomeres.
* Shortening of the sarcomere results in muscle contraction.

[Figure: diagrammatic representation of a sarcomere]

Within the muscle, myofibrils are surrounded by tubules (transverse tubules) and channels. The tubules are surrounded by the sarcoplasmic reticulum, where calcium ions are stored.

Muscle contraction requires a signal to activate the process. Following a nerve impulse, acetylcholine (a neurotransmitter) is released onto the sarcolemma (the thin membrane covering the muscle fiber) and binds to receptors, which send a signal into the membrane. The signal then travels into the cell through the transverse tubules, causing dihydropyridine receptors on the tubules to interact with ryanodine receptors located on the sarcoplasmic reticulum. This interaction opens channels on the sarcoplasmic reticulum, which release calcium ions into the cell. The calcium ions then bind to troponin (proteins located on tropomyosin along the actin filament).

* Before calcium release, troponin prevents the myosin heads from interacting with actin.

Once the calcium ions bind to troponin, they cause a conformational change that moves the troponin-tropomyosin complex and exposes the binding sites on actin. This allows the myosin heads to bind to actin through those sites (ATP provides the energy required for binding). The release of ADP (following ATP hydrolysis) causes the myosin head to flex, which pulls the actin filament along and produces contraction. The process is repeated when another ATP molecule attaches to the myosin head.

* When calcium ions are pumped back into the sarcoplasmic reticulum through the terminal cisternae, the troponin-tropomyosin complex is restored, preventing actin and the myosin heads from interacting.
* This cycle of muscle contraction is known as the cross-bridge cycle, and the overall model as the sliding filament theory. Here, calcium acts as a second messenger in the signaling process.

Mitochondria are primarily involved in the production of chemical energy (ATP).
Given that cellular biochemical reactions are powered by ATP, mitochondria are some of the most important organelles. Based on a variety of studies, calcium signaling has been shown to help regulate this energy production; calcium signaling can also set off a series of events that ultimately results in cell death.

Most of the calcium ions enter mitochondria through the mitochondrial calcium uniporter. Because the uniporter has a low affinity for calcium, it is assumed to function as a channel, allowing calcium to flow into the matrix when the cytoplasmic concentration rises, driven by the electrochemical potential gradient across the inner membrane. That said, it is worth noting that the molecular characteristics of the uniporter are still largely unknown. Aside from the uniporter, calcium ions can also enter the organelle through LETM1 (leucine zipper-EF-hand containing transmembrane protein 1) and NCLX/NCKX6; however, these routes only admit calcium at lower rates.

In the mitochondrion, calcium is suspected of stimulating the intramitochondrial pyruvate dehydrogenase phosphatase (responsible for dephosphorylating a serine in a subunit of pyruvate dehydrogenase). In doing so, calcium ions activate pyruvate dehydrogenase, which increases the release of NADH and FADH2 and, in turn, the production of ATP. In studies involving the protozoan T. brucei, TcPDP and TbPDP were shown to be calcium-sensitive phosphatases; activation of these enzymes by the influx of calcium into mitochondria resulted in an increased AMP/ATP ratio.

* In the Krebs cycle, calcium is suspected to regulate three main enzymes: pyruvate dehydrogenase, isocitrate dehydrogenase, and α-ketoglutarate dehydrogenase, which act at different stages of the cycle.
* With increased physical activity, studies have observed an increase in matrix calcium and increased activation of mitochondrial dehydrogenases. These enzymes fuel electron transport, promoting the increase of nicotinamide adenine dinucleotide (NADH) and consequently of ATP.

Initially, mitochondrial calcium was thought to be involved mainly in the regulation of cytosolic calcium. In subsequent studies, however, the influx of calcium into the organelle was shown to affect the dehydrogenases (enzymes involved in the removal of hydrogen during oxidation-reduction reactions) of the tricarboxylic acid (Krebs) cycle. In hepatocytes and sensory neurons, this influx has been associated with an increase in NADH, which indicates that calcium promotes energy production.

The Krebs cycle takes place right after glycolysis, in the presence of oxygen. First, pyruvate produced through glycolysis is converted into acetyl-CoA by the pyruvate dehydrogenase complex; this stage also produces carbon dioxide and NADH. Acetyl-CoA is then combined with oxaloacetate to form citrate, catalyzed by citrate synthase. Next, citrate, a six-carbon molecule, is isomerized by aconitase into isocitrate, which is in turn oxidized to alpha-ketoglutarate (α-ketoglutarate) by isocitrate dehydrogenase, again releasing carbon dioxide and a molecule of NADH. Alpha-ketoglutarate is then converted to succinyl-CoA by alpha-ketoglutarate dehydrogenase; as in the previous step, a molecule of carbon dioxide and one of NADH are released. Succinyl-CoA, a four-carbon molecule, is then converted to succinate by succinyl-CoA synthetase, and guanosine triphosphate (GTP) is produced.
Succinate is then converted to fumarate by succinate dehydrogenase. In this step the enzyme-bound FAD (flavin adenine dinucleotide) is reduced to FADH2, which in turn reduces ubiquinone to ubiquinol (QH2) for the electron transport chain. The fumarate is then converted to malate by fumarase. Lastly, malate is converted to oxaloacetate by malate dehydrogenase. This step also gives off a molecule of NADH.

* Each turn of the cycle yields 3 molecules of NADH, 1 FADH2, and 1 GTP. Because one glucose molecule supplies two acetyl-CoA, the cycle turns twice per glucose, yielding 6 NADH and 2 FADH2, which are used to produce ATP.

High levels of calcium in the mitochondria have also been associated with programmed cell death (apoptosis). Calcium-sensitive factors associated with apoptosis can also be found in the endoplasmic reticulum and cytoplasm. Generally, an increased influx of calcium into the mitochondria is suspected to increase mitochondrial membrane permeability and eventually alter the structure of the membrane. In doing so, calcium accumulation results in the opening of mitochondrial permeability transition pores and the subsequent release of pro-apoptotic factors like cytochrome c.

* In normal cells, cytochrome c is located in the mitochondrial intermembrane/intercristae spaces. Here, it is attached to the phospholipid cardiolipin and plays an important role in the respiratory chain.

Following disruption of the outer membrane by calcium ions or any other factor, cytochrome c is separated from the phospholipid and released into the cytosol. In the cytosol, cytochrome c activates the apoptotic protease activating factor (Apaf-1), which is vital for the proteolytic activation of caspase-9 and caspase-3, the proteases actively involved in cell destruction.

Transcription is a vital process through which RNA copies of gene sequences are made. The copies (mRNA) are then used as a blueprint for protein synthesis in the cytoplasm. The process through which proteins are synthesized from the mRNA template (transcript) is known as translation.

* The transcription process can be divided into several stages including transcription initiation, elongation, and termination. Between initiation and elongation, there are also a few intermediate steps, namely promoter-proximal pausing and pause release.

Whereas some of the calcium ions are released into the cytoplasm from intracellular stores (e.g. the endoplasmic reticulum), others are imported from the extracellular matrix through voltage-dependent calcium channels or receptor-operated channels. Calcium signaling has been shown to play a role in several of these stages of transcription.

During transcription, pause release is characterized by the release of paused Pol II (RNA polymerase II) for elongation. During this phase of transcription, calcium signaling can contribute to the phosphorylation of CDK9 (cyclin-dependent kinase 9) at Thr-186. This is an important step that activates the kinase function of P-TEFb (positive transcription elongation factor b) in polymerase regulation. In studies involving HeLa cells, the inhibition of CaMK and calmodulin (a calcium-binding messenger protein) using KN-93 has been shown to reduce the phosphorylation of CDK9 at Thr-186, which was taken as evidence that calcium plays a role in the phosphorylation process. In other studies involving CD4+ cells, the use of thapsigargin (a non-competitive inhibitor of SERCA) has also been shown to reduce the phosphorylation of CDK9 at Thr-186. As an inhibitor, thapsigargin prevents the re-uptake of calcium ions into the endoplasmic reticulum through the P-type ATPase SERCA.
As a result, the endoplasmic reticulum is unable to release calcium into the cytoplasm in response to signaling. As a result of calcium depletion, phosphorylation is hindered. Transcription elongation involves the synthesis of the RNA chain as RNA polymerase moves along the template DNA strand. Following pause release, a number of studies have reported that calcium is involved in regulating blocks associated with this process. For instance, the use of calcium ions ionophore promotes the elongation of fos transcripts. This allows extracellular calcium to pass through the membrane and activate calcium signaling. When media without calcium ions is used, studies have revealed diminished fos induction. * In rats, studies have also identified another calcium-regulated transcription block in exon 1 (a sequence of DNA in mature mRNA) of MKP-1. Calcium signaling has also been linked to splicing as well as changes in the rate of transcription elongation. This may take different forms. In rats, for instance, the modification of histones by calcium signaling has a direct impact on the rate of elongation. One of the best examples of this is the acetylation of histone 2B following induction of calcium signaling by depolarization of hippocampal cells in rats. Histone acetylation is a crucial process through which chromatin architecture is changed during transcription. Therefore, the rate of acetylation, influenced by calcium signaling, plays a major role in the rate of transcription. Though the actual role of calcium signaling in alternative splicing is still under investigation, a number of studies have found a relationship between the two (calcium signaling and alternative splicing). For example, the influx of calcium ions following depolarization of mouse neuroblastoma cell line N2 has been shown to influence the skipping of NCAM (Neural cell adhesion molecule) exon 18. This, however, is largely dependent on the rate of H3 and H4 acetylation which is also influenced by calcium influx. In another study, the depolarization of cardiomyocytes in mice was associated with the hyperacetylation of histones H3 and H4 as well as the subsequent splicing in the number of genes. Calcium signaling is also involved in transcription termination. As the name suggests, transcription termination is the process where a new RNA dissociates with the polymerase and DNA template strand. This, based on various findings, is a result of given stress conditions. With respect to reproduction, calcium signaling plays an important role in the maturation of oocytes as well as fertilization. * Oocytes are ovarian cells that can divide (through meiosis) to form an ovum. Generally, a number of factors are involved in the maturation and development of oocytes in mammals. For instance, Cyclic adenosine monophosphate (cAMP), which acts as a second messenger, is associated with the first meiotic arrest. As well, a surge in Luteinizing hormone (LH) triggers its (oocyte) maturation. Like these factors, calcium ions have also been shown to play a regulatory role in the development of oocytes. For instance, in mice, the influx of extracellular calcium ions into the cell triggers meiotic resumption thus promoting the development of the egg. After the second meiotic arrest, embryo development is influenced by fertilization. Here, phospholipase Czeta from the sperm cells activates increased release and oscillations of calcium ions in the cytoplasm of the egg (calcium is released from intracellular stores of the eggs). 
The calcium then binds to the calcium-binding messenger calmodulin, which in turn activates Ca2+/calmodulin-dependent protein kinase II. The kinase triggers a signaling cascade that results in the down-regulation of the maturation-promoting factor (MPF) and thus an exit from meiosis. This, in turn, allows the fertilized egg to resume the cell cycle and begin mitotic divisions.

* In studies involving mice, sperm lacking the PLCζ protein were found to be incapable of causing calcium oscillations in the oocyte.

Exocytosis refers to the process through which secretory vesicles fuse with the plasma membrane, resulting in the discharge of vesicular contents into the extracellular space. This process serves a number of functions, including the removal of unwanted material from the cell as well as cell communication. In synaptic transmission, for instance, calcium ions help drive the movement of vesicles towards the membrane so they can release their contents. Near the axon terminal, the influx of sodium ions depolarizes the membrane (the local charge becomes more positive). This stimulates voltage-gated calcium channels to open and allow entry of calcium ions. At the synapse, calcium stimulates the vesicles to move towards and fuse with the membrane so they can release their contents (neurotransmitters) into the synaptic cleft. From the synaptic cleft, these neurotransmitters diffuse to the post-synaptic neuron, where they bind to receptors and trigger a response.

* The release of neurotransmitters is inhibited if calcium channels are blocked.

(Figure: the four steps of calcium signaling.)
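To make the synaptic-release sequence just described easier to follow, here is a toy sketch of the steps as an ordered pipeline. It is illustrative only: the step strings paraphrase the prose above, the function name is invented, and nothing here models real channel kinetics or vesicle pools.

```python
# Illustrative only: calcium-triggered neurotransmitter release as an ordered
# pipeline, mirroring the description above.

def synaptic_release(calcium_channels_blocked: bool = False):
    steps = [
        "action potential depolarizes the axon terminal (sodium influx)",
        "voltage-gated calcium channels open; calcium enters the terminal",
        "calcium triggers vesicles to dock and fuse with the membrane",
        "neurotransmitter is released into the synaptic cleft",
        "neurotransmitter binds receptors on the post-synaptic neuron",
    ]
    if calcium_channels_blocked:
        # Matches the note above: blocking calcium channels inhibits release.
        return steps[:1] + ["calcium channels blocked; no calcium influx; no release"]
    return steps

if __name__ == "__main__":
    print("\n".join(synaptic_release()))
    print("---")
    print("\n".join(synaptic_release(calcium_channels_blocked=True)))
```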
https://www.microscopemaster.com/function-of-calcium-signaling.html
A mitochondrion is an organelle, a specialized structure found inside almost all eukaryotic cells, first described by Benda in 1898. Mitochondria are often called the powerhouses of the cell because they generate most of the cell's chemical energy in the form of ATP, but they are now known to take part in a much wider range of processes, from apoptosis to immunity. They can make up as much as 10% of the cell volume, and their number varies enormously with cell type: some cells contain several thousand mitochondria while others, such as mature mammalian red blood cells, have none. Liver cells contain roughly 1,600 mitochondria and some oocytes around 300,000, while at least one protist (Pelomyxa carolinensis) manages without them altogether. Mitochondria are especially abundant in cells and parts of cells associated with active processes; hummingbird flight muscle is among the richest known sources.

Structurally, mitochondria are rod-shaped, double-membraned organelles, typically on the order of 0.5-1.0 µm in diameter. The outer membrane covers the surface of the organelle, while the inner membrane lies within it and is thrown into many folds called cristae; the space enclosed by the inner membrane is filled with a gel-like matrix. Both membranes are built from phospholipids and proteins, but the inner membrane is much less permeable to ions and small molecules than the outer membrane, which provides compartmentalization. The cristae greatly increase the surface area available to the machinery of energy generation and are a distinctive feature used to identify mitochondria in tissue sections. Mitochondria can be seen in the light microscope, but their detailed internal structure is revealed only by electron microscopy and tomography. They are pleomorphic organelles whose form varies with cell type, cell-cycle stage, and intracellular metabolic state.

Biological energy conversion in mitochondria is carried out by the membrane protein complexes of the respiratory chain and the mitochondrial ATP synthase. Enzymes of the citric acid (Krebs) cycle in the matrix work in sequence to oxidize pyruvate delivered from the cytoplasm, releasing carbon dioxide and feeding electrons into the respiratory chain; the chemical bonds in fat (as triglyceride), carbohydrate (as glucose and glycogen), and protein (as amino acids) are ultimately processed here to release energy. Beyond oxidative phosphorylation and lipid oxidation, mitochondria host other pathways and functions, including the first few steps of the urea cycle, the synthesis of phospholipids and heme, the production of hormones such as testosterone and estrogen, calcium homeostasis, and the activation of apoptosis, by which unwanted and excess cells are pruned away during development. In electron microscopy and tomography of a cell-free model of apoptosis derived from Xenopus eggs, cytochrome c was released from the mitochondria along with other changes characteristic of apoptosis. Mitochondria are also the major source of reactive oxygen species inside the cell, with respiratory complexes I and III being the main sites of superoxide generation; this is a two-edged sword, since low levels of ROS may play a role in gene regulation while excessive ROS contributes to disease.

Mitochondria carry their own genome. Human mitochondrial DNA is a circular molecule of 16,569 base pairs, present in several copies per organelle, carrying 37 genes: 13 protein-coding genes involved in respiration, 22 tRNAs, and 2 rRNAs. The DNA is packaged with proteins into nucleoids, each containing roughly 3-4 mitochondrial genomes and as many as 20 different polypeptides, although it is less clear whether animal mtDNA is organized in the same way. The mitochondrial genetic code also differs slightly from the nuclear one: ATA codes for methionine rather than isoleucine, and AGA/AGG act as stop codons rather than arginine. Even so, the vast majority of mitochondrial proteins (about 99%) are encoded in the nucleus, made outside the organelle, and imported, and most lipids, such as phosphatidylcholine, are synthesized in the endoplasmic reticulum and transported to the mitochondrial membranes, where cardiolipin is assembled. Mitochondria are thought to have arisen around two billion years ago from the engulfment of an ancestral bacterium; they have kept the double-membrane character of their ancestors and the core machinery of ATP production, but their overall form and composition have been drastically altered and they have acquired myriad additional functions within the cell.
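As a small numerical complement to the citric acid (Krebs) cycle mentioned above, and to the per-glucose figures given earlier in the calcium-signaling article, here is a minimal bookkeeping sketch. The per-turn numbers are the standard textbook values; the code is for illustration only, not a metabolic model.

```python
# Back-of-the-envelope bookkeeping for one turn of the citric acid (Krebs) cycle,
# using standard textbook values.

PER_TURN = {"NADH": 3, "FADH2": 1, "GTP": 1, "CO2": 2}

def per_glucose(turns: int = 2):
    """One glucose yields two acetyl-CoA, so the cycle turns twice per glucose."""
    return {metabolite: count * turns for metabolite, count in PER_TURN.items()}

if __name__ == "__main__":
    print("Per turn:", PER_TURN)
    print("Per glucose (two turns):", per_glucose())
    # Per glucose: {'NADH': 6, 'FADH2': 2, 'GTP': 2, 'CO2': 4}
```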
https://gionalmana.web.app/459.html
The interplay of respiration, circulation, and metabolism is key to the functioning of the respiratory system as a whole. The cells set the demand for oxygen uptake and carbon dioxide (CO2) discharge, that is, for gas exchange in the lungs, and the blood circulation links the sites of oxygen uptake and oxygen utilization. The overall performance of the respiratory system depends both on its ability to make functional adjustments to changing needs and on the design features of the sequence of structures involved, which set the limit for respiration.

The major purpose of respiration is to provide oxygen to the cells at a rate adequate to satisfy their metabolic needs. This involves transport of oxygen from the lungs to the tissues by means of the circulating blood. In antiquity and the medieval period, the heart was regarded as a furnace in which the "fire of life" kept the blood boiling. Modern cell biology has unveiled the truth behind the metaphor: every cell maintains a set of furnaces, the mitochondria, which supply the cell's energetic needs by oxidizing foodstuffs such as glucose. The precise object of respiration is therefore the supply of oxygen to the mitochondria.

Cell metabolism depends upon energy derived from high-energy phosphates such as adenosine triphosphate (ATP), whose third phosphate bond can release a quantum of energy to fuel many cell processes, such as the synthesis of protein molecules or the contraction of muscle fibre proteins. In the process, ATP is degraded to adenosine diphosphate (ADP), a molecule with only two phosphate bonds. Recharging the molecule by adding the third phosphate group requires energy derived from the breakdown of substrates or foodstuffs. Two pathways are available:

Anaerobic glycolysis, or fermentation, which operates in the absence of oxygen; and

Aerobic metabolism, which requires oxygen and involves the mitochondria.

The anaerobic pathway generates acid waste products and is wasteful of fuel: when one glucose molecule is broken down, only two ATP molecules are generated. In contrast, aerobic metabolism has a far higher yield (about 36 molecules of ATP per molecule of glucose) and results in "clean" wastes, water and carbon dioxide (CO2), which are easily eliminated from the body and are recycled by plants in photosynthesis. The aerobic pathway is therefore preferred for any prolonged high level of cell activity. Since oxidative phosphorylation takes place only in mitochondria, and since every cell must produce its own ATP (it cannot be imported), the number of mitochondria in a cell reflects its capacity for aerobic metabolism, and hence its oxygen requirement.

High Altitudes

The ascent from sea level to high altitude has well-known effects upon respiration. The progressive fall in barometric pressure is accompanied by a fall in the partial pressure of oxygen, both in the ambient air and in the alveolar spaces of the lung, and it is this fall that poses the main respiratory challenge to humans at high altitude. Humans, as well as some other mammalian species such as cattle, adjust to the fall in oxygen pressure through the reversible process of acclimatisation, which begins, whether deliberately undertaken or not, with time spent at high altitude. This contrasts with mountain species such as the llama, whose adaptation is heritable and genetically determined.
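Before continuing with acclimatisation, the ATP yields quoted above (2 ATP per glucose anaerobically versus about 36 aerobically) can be put side by side in a quick back-of-the-envelope comparison. This is illustrative arithmetic only; the demand figure is arbitrary and the function name is invented.

```python
# Quick comparison of the ATP yields quoted above.
# Illustrative arithmetic only; the demand value is arbitrary.

ANAEROBIC_ATP_PER_GLUCOSE = 2
AEROBIC_ATP_PER_GLUCOSE = 36

def glucose_needed(atp_demand: int, atp_per_glucose: int) -> float:
    """How many glucose molecules are needed to meet a given ATP demand."""
    return atp_demand / atp_per_glucose

if __name__ == "__main__":
    demand = 1_000_000  # arbitrary ATP demand, for illustration
    advantage = AEROBIC_ATP_PER_GLUCOSE / ANAEROBIC_ATP_PER_GLUCOSE
    print(f"Aerobic yield advantage: {advantage:.0f}x")
    print("Glucose needed anaerobically:", glucose_needed(demand, ANAEROBIC_ATP_PER_GLUCOSE))
    print("Glucose needed aerobically:  ", glucose_needed(demand, AEROBIC_ATP_PER_GLUCOSE))
```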
Humans achieve respiratory acclimatisation by activating a series of responses that raise the partial pressure of oxygen at every stage of the respiratory process, from the alveolar spaces in the lungs to the mitochondria in the cells, where oxygen is needed for the ultimate biochemical expression of respiration.

The decline in the ambient partial pressure of oxygen is offset to some extent by greater ventilation, which at rest takes the form of deeper breathing rather than a faster rate. Diffusion of oxygen through the alveolar walls into the blood is also favoured; in some laboratory animal experiments the alveolar walls have been found to be thinner at altitude than at sea level. The scarcity of oxygen at high altitude stimulates increased production of red blood cells and haemoglobin, which increases the amount of oxygen transported to the tissues, and the extra oxygen is more readily released thanks to increased levels of inorganic phosphates in the red blood cells, such as 2,3-diphosphoglycerate (2,3-DPG). With a prolonged stay at altitude, the tissues develop more blood vessels, and, as capillary density increases, the length of the diffusion path along which gases must pass is reduced, a factor augmenting gas exchange. In addition, muscle fibre size decreases, which also shortens the oxygen diffusion path.

The initial respiratory response to the fall in the partial pressure of oxygen in the blood on ascent to high altitude takes place in two small nodules, the carotid bodies, attached to the fork of the carotid arteries on either side of the neck. The carotid bodies enlarge with continued oxygen deprivation but become less sensitive to the lack of oxygen. Thickening of the small blood vessels in the pulmonary alveolar walls is related to the low partial pressure of oxygen in the lungs, as is a slight rise in pulmonary blood pressure, which is thought to improve oxygen perfusion of the lung apices.

1. What Are Some Key Facts About the Human Respiratory System?

Answer: The brain triggers breathing in response to increased levels of CO2 rather than reduced oxygen levels. It is a very common misconception that hypoxia (reduced oxygen) is what makes us breathe; our respiratory drive is primarily hypercapnic, meaning it depends on CO2 receptors and CO2 levels.

2. How Does the Human Respiratory System Work?

Answer: Breathing starts at the mouth and nose. We inhale air into the mouth or nose, and that air travels down the back of the throat and into the windpipe, or trachea. The trachea then divides into air passages called bronchial tubes. For the lungs to perform at their best, these airways need to be open during inhalation and exhalation and free from inflammation, swelling, and excess or abnormal amounts of mucus.

3. How Does the Respiratory System Relate to the Circulatory System?

Answer: The respiratory system is served by the pulmonary circulation. The pulmonary artery carries deoxygenated blood from the right heart to the lungs, where gas exchange takes place between the alveoli and the pulmonary capillaries. The oxygenated blood then returns to the left heart through the pulmonary veins.

4. Is the Lower Respiratory Tract Sterile?

Answer: The lower respiratory tract of healthy individuals has traditionally been considered a sterile environment, in which the presence of any type of bacteria, typically revealed by culturing, represents an abnormal and unhealthy state.
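As a rough, illustrative complement to the altitude discussion above: the inspired partial pressure of oxygen can be estimated with the standard relation PiO2 = FiO2 × (Pb − 47 mmHg), where 47 mmHg is the water vapour pressure at body temperature. The barometric pressures in the sketch below are approximate textbook values, not measurements.

```python
# Illustrative: inspired oxygen partial pressure (PiO2) at sea level vs. altitude,
# using PiO2 = FiO2 * (Pb - 47 mmHg). Barometric pressures are approximate.

FIO2 = 0.21             # fraction of oxygen in dry air
WATER_VAPOUR_MMHG = 47  # water vapour pressure at 37 degrees C

def inspired_po2(barometric_mmhg: float) -> float:
    return FIO2 * (barometric_mmhg - WATER_VAPOUR_MMHG)

if __name__ == "__main__":
    for label, pb in [("sea level", 760), ("about 3,000 m", 526), ("about 5,500 m", 380)]:
        print(f"{label}: Pb = {pb} mmHg -> PiO2 of roughly {inspired_po2(pb):.0f} mmHg")
```

At sea level this gives roughly 150 mmHg of inspired oxygen; at about 5,500 m, where barometric pressure is roughly half that at sea level, it falls to around 70 mmHg, which is the challenge acclimatisation works to offset.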
https://www.vedantu.com/biology/interplay-of-respiration-circulation-and-metabolism
It’s becoming increasingly clear that chronic dysfunction of mitochondria is another underlying factor that contributes to poor brain function and mental illness. Mitochondria are unique structures within every cell of your body. You have trillions and trillions of them, making up approximately 10% of your total body weight. They are considered the “powerhouses of the cell,” generating most of the energy in your body by converting your nutrition into adenosine-5’- triphosphate (ATP). ATP is your body’s main source of cellular fuel. You are constantly using it, and your brain needs enough of it to work properly (106-107). Along with your gut bacteria, your mitochondria are critically important and need to be supported to overcome depression and anxiety, and reach optimal brain and mental health. Mitochondria are especially abundant in your brain cells and involved in many important biological processes in the brain, including the regulation of free radicals and neurotransmitters. In fact, monoamine oxidase (MAO), the enzyme responsible for the metabolism of monoamine neurotransmitters, is localized within the outer mitochondrial membrane (91-93). So not surprisingly, numerous studies show that there is a correlation between impaired mitochondrial functioning in the brain and many psychiatric and neurodegenerative diseases, including bipolar disorder, major depressive disorder, multiple sclerosis, Parkinson’s disease, Alzheimer's disease, chronic fatigue syndrome, schizophrenia, psychosis, panic disorder, social anxiety, generalized anxiety and other stress-related diseases (82-90, 94-100, 102-104). Yes, you read that right. Every single one of those conditions has been linked to mitochondrial dysfunction. In fact, many researchers are convinced that mitochondrial dysfunction is involved in almost every chronic disease (108-110). Mitochondria dysfunction decreases ATP energy production and increases oxidative stress, which are commonly found in the brains of people suffering from brain and mental health disorders. Cognitive symptoms of mitochondrial dysfunction can also include impairments in attention, executive function and memory. Unfortunately, a number of psychiatric drugs damage the mitochondria and worsen dysfunction (105). But luckily, there are ways to halt and reverse mitochondrial decay. Below are a number of strategies I’ve used over the years to support my mitochondria and you can use them to regain optimal brain and mental health. Not surprisingly, eating lots of fresh, nutrient-dense whole foods is the most impactful action you can take to power your mitochondria. In order to thrive, your mitochondria need phytonutrients, antioxidants, healthy fats and proteins. Dr. Terry Wahls, MD, clinical professor of medicine at the University of Iowa, is a leading expert on the relationship between nutrition and mitochondrial health. She was diagnosed with multiple sclerosis (MS) more than a decade ago but reversed the neurodegenerative brain disease by repairing her mitochondria with an intensive nutritional strategy. She outlines how she recovered her health in her book The Wahls Protocol. Research on her protocol shows that patients witness a “significant improvement in fatigue” (67). She recommends eating six to nine cups of vegetables and fruits every day, including green veggies (kale, spinach), brightly colored vegetables (beets, carrots, peppers), and sulfur-rich veggies (broccoli, cauliflower). 
My Free Grocery Shopping Guide for Optimal Brain Health also contains a bunch of foods that you should be eating on a regular basis for optimal mitochondrial health. Dr. Wahls also has a fascinating TED talk that you can watch below if you're interested in learning more. Eating poor-quality foods can also wear down your mitochondria. Genetically, your mitochondria were not designed to deal with our current food environment and lifestyle habits. On top of this, your mitochondria are expected to perform proficiently for much longer, as our ancestors rarely lived to the age of 80. That’s why you should avoid refined sugars, processed flours, industrial oils and trans fats. They can damage your mitochondria and prevent them from properly producing energy. Dr. Wahls also recommends you avoid all gluten, dairy and soy products for optimal mitochondrial health. I feel much better avoiding them completely. Healthy fats, including omega-3 fatty acids, help build and strengthen the membranes of your mitochondria. They’ve also been shown to improve mitochondrial functioning in brain (5-7). That’s why Dr. Wahls recommends eating organic grass-fed beef or wild-caught fish, such as salmon, every day. Avocados, nuts, seeds, coconut and olive oil are also rich in healthy fats. Supplementing with krill oil is another option. I’ve discussed the overwhelming benefits of krill oil before here. Not surprisingly, exercise strengthens your mitochondria by increasing oxygen and blood flow and activating biochemical pathways that produce new mitochondria (8). Runners have more high-functioning mitochondria than non-runners, and strength training and high-intensity interval training also increase the number of mitochondria and improve the efficiency of your existing mitochondria (9, 10). Many experts recommend exercise for brain health, and as I’ve mentioned before, it can also increase brain-derived neurotrophic factor (BDNF), your brain’s growth hormone. Low-level laser therapy (LLLT) is a treatment that uses low-level (low-power) lasers or light-emitting diodes (LEDs) to stimulate brain cells, helping them heal and function better. There is strong evidence to suggest that LLLT supports the mitochondria. Research shows that it reduces oxidative stress and increases the production of ATP energy in mitochondria (39, 40). These mitochondrial benefits have also been seen directly within the brain. Studies show that LLLT increases mitochondrial activity within brain cells, and this leads to beneficial effects in behaviour (41). On top of all this, LLLT treatment has been shown to increase the number of mitochondria and mitochondrial oxygen usage within the brain (42, 43). Frankly, it’s ridiculous that this therapy is not more well-known and promoted by doctors. But if you’ve read my blog for a while now, I’m sure you understand why. You don’t have to wait for conventional medicine to catch up, and you can experiment with it yourself since it’s known to be very safe (44). Platinum Therapy Lights Bio-450 (Combo Red/NIR) - This is a powerful all-one-device that shines 660 nm of red light and 850 nm of infrared light. I shine it on my forehead for 5-10 minutes every day or every other day. I also shine it on other parts of my head, and on my thyroid, thymus gland and gut. If you decide to get this device, you can use the coupon code OPTIMAL for a 5% discount. Vielight 810 – This is an intranasal device with 810 nm of near infrared light that I use regularly. 
It penetrates deeper into brain tissue and is absorbed better by the central nervous system. If you decide to get this one, you can use the coupon code JORDANFALLIS for a 10% discount. Some research has shown a 20-fold higher efficiency of light delivery to the deep brain through the nose instead of transcranial application (125). You can learn more about LLLT in this post. Infrared saunas are another excellent way to expose yourself to infrared light. Check out my post about the benefits here. And you should also limit your exposure to artificial blue light, as it can also wear down your mitochondria. You can learn more about the risks of too much blue light in this post. Resveratrol is a beneficial antioxidant compound found in grapes and red wine. Not only does it increase BDNF levels, but it also activates the SIRT1 gene. This gene triggers a number of positive biochemical reactions that protect and improve the functioning of your mitochondria. Caloric restriction and intermittent fasting also trigger the SIRT1 gene (11, 12, 13). In 2006, Harvard researchers found that resveratrol may increase lifespan by protecting the mitochondria (14). That’s why I take this resveratrol on a regular basis and will continue to do so for the rest of my life. Restricting your calories is one the best actions you can take to improve mitochondrial function. Studies show that eating less food reduces the demand and damage on your mitochondria. But reducing calories is tough to do and absolutely no fun. That’s why I intermittent fast instead. Fasting activates your mitochondria and triggers autophagy, which is an intracellular process that essentially allows the mitochondria to clean themselves by removing unwanted and damaged debris, proteins and reactive oxygen species (1, 2, 4). This process has been shown to reduce the risk of cancer, Parkinson’s disease and Alzheimer’s disease (3). NADH is a naturally-occurring compound found in the cells of all living organisms. It plays a key role in the production of energy within the cell and is highly concentrated within your mitochondria (45). Depletion of NADH has been linked to a number of diseases, including depression, chronic fatigue syndrome, Alzheimer’s and Parkinson’s, and stabilized oral NADH has been shown to improve all of these conditions (46, 47, 48). Although I don’t take it anymore, I’ve witnessed a beneficial effect from supplementing with this NADH through Amazon. LLLT also increases NADH in your mitochondria. A ketogenic diet is a very low-carb diet. When you restrict carbohydrate-rich foods, your body enters ketosis, a metabolic state in which your body and brain run on fatty acids and “ketones” instead of glucose (36). Ketones are an alternative source of energy for your brain cells and their mitochondria. When your mitochondria are dysfunctional, following a ketogenic diet can be an effective strategy to fuel the mitochondria. Ketogenic diets may help treat many different brain and mental health diseases including Alzheimer’s, Parkinson’s, epilepsy and autism. Exogenous ketones can help you get into ketosis quickly. I take Optimal Ketones, and it immediately increases my mental clarity (even when I'm eating carbohydrates). All of the B vitamins play an essential role in maintaining mitochondrial function, and your mitochondria will be compromised if you have a deficiency of any B vitamin (37). Deficiency is more likely if you take certain medications. I take this B complex. It includes the bioactive forms of all of the B vitamins. 
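Picking up the LLLT wavelengths mentioned above (660 nm red, 810 nm and 850 nm near-infrared): as a point of orientation, the energy carried by a single photon follows E = hc/λ, so the near-infrared photons carry slightly less energy than the red ones even though, as noted, they tend to penetrate tissue more deeply. The sketch below is plain physics for illustration; it says nothing about dosing, device output, or clinical effect.

```python
# Illustrative: photon energy E = h*c/wavelength for the LLLT wavelengths above.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    return H * C / (wavelength_nm * 1e-9) / EV

if __name__ == "__main__":
    for nm in (660, 810, 850):
        print(f"{nm} nm -> roughly {photon_energy_ev(nm):.2f} eV per photon")
```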
Ribose is a five carbon sugar created naturally by your body. Even though it’s a sugar, research suggests it does not raise blood sugar levels. Instead, your body stores it in the mitochondria (49, 50). Ribose is used by the mitochondria to produce ATP and if you don’t have enough, you’ll experience low energy (51). Chronic stress can deplete ribose, and certain conditions have been linked to chronic ribose deficiency, including depression and chronic fatigue syndrome. That’s why I recommend people supplement with ribose if they struggle with these disorders because it can help reduce mental and physical lethargy (52, 53). I don’t take it every day, but I do cycle this ribose with other mitochondrial enhancers. Coenzyme Q10 (CoQ10) is an antioxidant molecule found in every cell of your body. It’s particularly concentrated in the mitochondria, playing a key role in the production of energy and protecting the mitochondria from oxidative damage. Without CoQ10, your body cannot synthesize ATP because CoQ10 is an essential component of the mitochondrial electron transport chain. Many doctors are unaware that CoQ10 is an excellent treatment for many brain health issues, including depression, chronic fatigue, and Alzheimer’s disease. Low levels of CoQ10 can cause brain fog, mental fatigue, difficulty concentrating, memory lapses, depression and irritability (68-70). Researchers have found that CoQ10 levels are significantly lower in the depressed patients (71). Unfortunately, chronic oxidative stress and medications can further deplete CoQ10. But supplementing with CoQ10 can increase your mitochondrial energy production and reduce symptoms of depression and chronic fatigue (71). I took this CoQ10 supplement after coming off psychiatric medication. Ubiquinol is a lipid-soluble form of CoQ10. I haven’t taken it but it is the most active form of CoQ10. If you decide to supplement with CoQ10, you should take it with a healthy fat source such as coconut oil to increase absorption because it is fat soluble. Food sources with high natural concentrations of CoQ10 include organic red palm oil and grass-fed beef heart (72, 73). Pyrroloquinoline quinone (PQQ) is a vitamin-like enzyme and potent antioxidant found in plant foods with a wide range of brain health and mitochondrial benefits. It’s been shown to preserve and enhance memory, attention, and cognition by protecting the mitochondria from oxidative damage and promoting the growth of new mitochondria in the brain (56-59). Since it helps grow new mitochondria, it may help you if you suffer from depression, since fewer mitochondria have been found in people with depression (63). Reactive nitrogen species (RNS) and reactive oxygen species (ROS) cause severe stress on brain cells and mitochondria, and PQQ has also been shown to suppress RNS and ROS (60-62). Researchers have found that supplemental PQQ can be neuroprotective by increasing mitochondrial activity levels (64-66). I recommend taking 10-20 mg each day along with CoQ10, as they are synergistic. Taking them together leads to further improvements in cognitive function (57). It's also included in this supplement. Check out the “Neuroprotective” section of the PQQ Wikipedia page for more information on the brain health benefits of this compound. Magnesium is a vital mineral within your body, and the mitochondria are considered magnesium “storage units” because they hold onto a lot of your body’s magnesium. 
Magnesium protects the mitochondria and plays a role in the production and transfer of ATP within the mitochondria. And research shows that if you have a deficiency in magnesium, your brain cells will have fewer mitochondria, and they will be less healthy (54, 55). This is just another reason to supplement with at least 200 mg of magnesium every day. It’s one of the most important nutrients for optimal brain health. I take this one through Amazon. Acetyl-Carnitine (ALCAR) is an acetylated form of the amino acid carnitine. Carnitine is an amino acid that improves mitochondrial activity and plays an important role in energy production by transporting fatty acids directly into the mitochondria of your brain cells. It is required to produce ATP and deficiencies are associated with reduced mitochondrial function in the brain (74). Supplementing with ALCAR makes it easier for fatty acids to cross your blood-brain barrier and nourish the mitochondria within your brain. This can improve your mood, memory and energy levels. Several studies show that ALCAR eases depressive symptoms and improve quality of life in patients with chronic depression (75-78). And individuals with autism often have reduced levels of carnitine within their brain (79). ALCAR is also synergistic with Alpha Lipoic Acid (ALA), meaning that when you take them together, they are more effective at supporting the mitochondria in your brain. ALA is a mitochondrial enzyme and antioxidant. It is fat soluble and can easily cross your blood-brain barrier. It’s been shown to improve cognition by reducing oxidative stress, and protecting existing mitochondria and creating new mitochondria in the brain (80, 101). Paying attention to your mitochondria is crucial for optimal brain and mental health, and luckily there are a number of dietary and lifestyle habits that can protect and support mitochondrial function. Over time, if you follow these strategies, you can improve your mitochondrial health and naturally restore your mood and energy levels. Please share this post with one of your friends or family members who you think might benefit from protecting and supporting their mitochondria, because it really is an underappreciated and unknown aspect of optimal brain and mental health.
https://www.optimallivingdynamics.com/blog/tag/better+brain+health
Wishful thinkers are often ingenious in their ability to rationalize the avoidance of exercise. Some worry whether too much exercise would wear them out prematurely. Others question whether an older body really needs to break into a sweat. Recent discoveries, combined with a widely accepted theory of aging, clearly counter these rationalizations and document how exercise initiates a coordinated series of responses by the cells of the body culminating in greater strength, energy and stamina. The mitochondrial theory of aging is widely accepted. It states that cumulative damage to the mitochondria, the power plants in each cell, contributes to physical decline, a wide variety of degenerative conditions, and ultimately to cell death. Maintaining mitochondrial health is therefore essential to successful aging. This is where exercise comes in. Here’s how it works. Recent research with animals has demonstrated that it is possible, at least with some cells, to artificially increase the production of mitochondria without exercise. The authors demonstrated that they could increase the production of mitochondria by introducing a gene to overproduce one of the key regulators of mitochondrial biogenesis, known as CaMK. This research may lead in the future to the production of a drug capable of stimulating CaMK synthesis and consequently mitochondrial production, without exercise. Research on another cellular nutrient, acetyl L-carnitine, shows encouraging results regarding dietary supplementation to support mitochondrial health. Details in next month’s issue. Exercise, especially aerobic exercise (running, swimming), burns oxygen and consumes fuel (glucose) at a faster rate than can be supplied to the muscle tissue. The muscle responds to this oxygen-nutrient deficit by activating numerous cellular genes to correct the condition. One biochemical factor, HIF-1 (hypoxia inducible factor-1) is activated when the oxygen present in the tissues falls below a certain level, as occurs during strenuous exercise. HIF-1 in turn initiates a cascade of cellular events at the gene level that ultimately promotes the building of new energy supply routes (blood vessels), as well as an increased number of oxygen-carrying red blood cells. The greater the demand (the harder you work the muscle), the greater the size of the newly constructed vascular network. The worked tissues now have a sufficient supply of fuel to support the new energy demands placed on them. However, yet another important event must occur before the muscle cells can take full advantage of the increased nutrient supply. Rebuilding muscle tissue requires energy, the major source of which is the mitochondria. The more mitochondria a cell possesses, the more capable it is of repairing and rebuilding tissue. Furthermore, the mitochondria impart stamina to the body. The worked muscle therefore requires more of these energy generators, to utilize the increased nutrient supply and build a stronger muscle. Each cell typically carries between 400 and 4,000 mitochondria. The number actually increases in response to exercise. How does this occur? When a muscle is worked, as occurs with running or lifting weights, it converts the energy stored in the chemical molecule ATP to mechanical energy – e.g., muscle contraction – as well as specific biochemical changes in the cell. The cell is equipped with a sensor that carefully monitors these exercised-induced cellular changes, and it responds by promoting the activation of key regulators of mitochondrial biogenesis. 
These regulators turn on the multiple genes required for construction of new mitochondria. So the worked muscle now has both an increased supply of nutrients as well as more powerhouses to convert the nutrients to energy for new muscle synthesis. The early portion of our lifespans is characterized by vigorous growth of cells and cellular components, such as mitochondria. As we age, degenerative processes occur as a result of an accumulation of errors induced by free radical attacks on key cellular components, including genes, mitochondria and other cellular structures. The consequence of these attacks is old, worn-out and distorted cellular molecules (cellular garbage). The cell must remove the impaired molecules, which interfere with normal cellular activity. This process requires energy, and exercise helps the cell produce the energy required to remove the garbage, as well as inhibit the production of cellular garbage. Although these degenerative processes are inevitable (at least at present), they can be attenuated by a healthy life-style and, most emphatically, exercise. The loss of mitochondria is a hallmark of the aging process, one that manifests itself as a loss of mental clarity, strength, vigor and endurance. Exercise, via the mechanism described above, can offset these effects by replenishing the cellular mitochondria, restoring the cell to a more youthful state. Thus we see that regular physical activity results in a chain of cellular benefits. Energy production, blood flow and cellular efficiency all increase. Regular physical activity promotes leanness by stimulating the burning of fat for energy. We feel these benefits in the form of greater strength and stamina, and often an enhanced sense of well-being and mental clarity. positive effects of exercise on aging bodies. QUESTION: I’m very active, exercise a lot and participate in competitive events that are physically demanding. Will the Juvenon product provide additional protection to my health? ANSWER: Strenuous physical exercise has a positive effect on overall health. However, it does increase the production of oxidants that can damage tissue. It also places increased demand on the mitochondria to produce more energy. The body responds to the stress by increasing its oxidant defense system and producing more mitochondria. Nevertheless, it could use additional help. Juvenon Energy Formula™ contains compounds that support the body’s oxidant defense system. These compounds have been demonstrated to promote the production of a key structural component (cardiolipin) in the mitochondria. Cardiolipin functions as a scaffold for the mitochondrial machinery involved in energy production. An inadequate level of this component will result in lower energy. Thus, although the body responds to oxidant stress by increasing its antioxidant defense, the compounds present in the Juvenon formula add additional protection.
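As a purely illustrative complement to the biogenesis mechanism described above, the toy model below shows how a per-cell mitochondrial count might creep upward with repeated training sessions within the 400 to 4,000 per-cell range quoted earlier. The starting value and ceiling come from that quoted range; the 2% per-session growth rate is an invented placeholder, not a physiological constant.

```python
# Toy illustration only: mitochondrial count rising with regular training.
# Start and ceiling echo the 400-4,000 per-cell range quoted above;
# the 2% per-session growth rate is a made-up placeholder.

START_COUNT = 400
CEILING = 4000
GROWTH_PER_SESSION = 0.02  # hypothetical

def mitochondria_after(sessions: int) -> int:
    count = START_COUNT
    for _ in range(sessions):
        count = min(CEILING, count * (1 + GROWTH_PER_SESSION))
    return round(count)

if __name__ == "__main__":
    for sessions in (0, 10, 50, 100):
        print(f"after {sessions:3d} sessions: about {mitochondria_after(sessions)} mitochondria per cell")
```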
https://juvenon.com/exercise-gene-activator-for-health-and-strength-403/
Most likely you’ve learned that the “mitochondria is the powerhouse of the cell”. Although mitochondria are cellular organelles that produce energy, this phrase undervalues its many vital functions. Mitochondrial health may be the secret key to optimal health, energy, and longevity. Keep reading to learn how you can protect and maximize your mitochondria.Mitochondrial health may be the secret key to optimal health, energy, and longevity. Table of Contents - Mitochondria Definition - Mitochondria Function - Mitochondrial Theory of Aging - How to Maximize Mitochondria - Mitochondria Diet - Mitochondria is the powerhouse of the cell - Caloric Restriction and Mitochondria - Sleep and Mitochondria - Inflammation and Mitochondrial Dysfunction - Exercise and Mitochondria - mTor and Mitochondria - Conclusion This post may include affiliate links to some of my favorite products that I use and recommend. Read my full affiliate disclosure here. Mitochondria Definition Mitochondria are cellular organelles that produce energy through aerobic respiration. Although mitochondria have many vital functions, they are best known as the power plants of the cell. They are supremely important for your brain and your muscles as they use more energy than the rest of your organs. Mitochondria are like cellular batteries that need to be constantly charged through respiration. They supply every cell, tissue, and organ in your body with energy. For example, your brain burns more energy than any other organ. Therefore your brain cells have a large number of mitochondria. The health and strength of your body at any given time depends on the health of your mitochondria. If you want to increase your strength and energy potential you need to protect your mitochondria. Mitochondria Function - Production of energy (ATP Synthesis) through aerobic respiration (1) - Production of heat: (non-shivering thermogenesis) - Independent units within eukaryotic cells with Mitochondrial DNA (mtDNA) - Plays a role in apoptosis (programmed cell death) needed to recycle useless or harmful cells - Storage of Calcium Ions which have many vital functions including signal transduction, neurotransmitter release, and contraction of muscle cells Mitochondrial Theory of Aging and Disease The Mitochondrial theory of aging proposes that accumulated oxidative damage to mitochondrial DNA (mtDNA) is a primary element of aging. Proponents of this theory argue that free radicals cause mitochondrial dysfunction, the decline and death of mitochondria. Simply, mitochondrial dysfunction causes chronic disease and aging. Free radicals are a byproduct of energy production. Ironically, mitochondria are vulnerable to the oxidative damage caused by the free radicals they produce. Unfortunately, both the quality and quantity of your mitochondria decline as you age. The symptoms of mitochondrial dysfunction include loss of energy, fat storage, decreased muscle mass, and cognitive decline. Mitochondrial DNA (mtDNA) is more vulnerable to free radicals than the DNA in the nucleus. This is because antioxidants can’t penetrate the mitochondrial membrane to enter the mitochondria. The antioxidants found in the diet do not directly protect your mitochondria. Instead, your body produces a mitochondrial protecting enzyme, superoxide dismutase. How to Maximize Mitochondria Superoxide dismutase (SOD), is a detoxifying enzyme within your mitochondria. It neutralizes superoxide by converting them back into oxygen. It can even neutralize free radicals on contact. 
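For reference, the chemistry behind the paragraph above is the standard dismutation reaction catalyzed by SOD. Note that it produces hydrogen peroxide as well as oxygen, with the peroxide then cleared by catalase (or glutathione peroxidase):

\[
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} \;\xrightarrow{\ \text{SOD}\ }\; \mathrm{H_2O_2} + \mathrm{O_2}
\qquad
2\,\mathrm{H_2O_2} \;\xrightarrow{\ \text{catalase}\ }\; 2\,\mathrm{H_2O} + \mathrm{O_2}
\]

So "neutralizing" superoxide is really a two-step hand-off rather than a direct conversion back to oxygen.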
SOD serves other vital functions. The SOD gene may act (2) as a tumor-suppressor gene, thereby helping to prevent cancer. It may also protect against (3) dementia. SOD can increase lifespan; for example, the reason women live (4) longer than men could be explained by their superior SOD activity.

What is the Best Mitochondria Diet

Eating a vegetarian diet can boost SOD activity naturally, thereby slowing the aging process. In a study comparing omnivores to vegetarians, the vegetarian group had (5) a 300% higher SOD enzyme activity than the omnivore control group! Vegetarians have lower rates of cancer and cardiovascular disease, and live longer than non-vegetarians. The reduced risk is due to epigenetic changes in gene expression (6), whereby the foods that you eat can turn certain genes on and off. Regardless of SOD's role in mitochondrial function, turning on this gene is essential for overall health.

Ergothioneine and Mitochondria

Ergothioneine (ET) is (7) a unique sulfur-containing derivative of the amino acid histidine. Interestingly, you have a protein in your body with the sole purpose of transporting ET from your food to your tissues. ET is concentrated in areas of relatively high oxidative stress such as the eye, liver, bone marrow, and semen. Therefore, researchers propose that it is an important part of your physiology. ET is a "cytoprotectant," or cell protector. It can enter the nucleus and protect your DNA, and it can enter and protect your mitochondria. This makes ET a valuable and potent intra-mitochondrial antioxidant. You can only get ET from your diet, and cells starved of ET weaken. Because of its value, some researchers argue that ET is worthy of being named a vitamin.

Sources of ET for Mitochondria

ET is made by tiny microbes in the soil and is absorbed through the root systems of plants. Mushrooms are the best source, with (8) over 40 times more ET than any other food! Beans are a distant second, followed by organ meats. Consider adding mushrooms and beans to your diet, as they have many other vital nutrients. Caution: you should cook beans and mushrooms, and not all mushrooms are safe to eat.

Beet Juice and Nitrates for Mitochondria

Nitrates, found in vegetables, are metabolized (9) and converted to nitrite and then to bioactive nitric oxide. Nitric oxide acts as a vasodilator, meaning it opens blood vessels and increases blood circulation. Thereby, consumption of nitrates may reduce blood pressure and improve athletic performance. Drinking beet juice can significantly improve athletic performance by enhancing the energy production of the mitochondria. A randomized, double-blind, crossover, placebo-controlled study tested the effects of beet juice versus de-nitrated beet juice. The study found a strikingly significant improvement (10) in competitive cycling times after drinking a half liter of beetroot juice; in fact, all 9 members of the beetroot supplement group achieved better times. The researchers took muscle biopsies from subjects before and after nitrate supplementation. They found (11) that you can improve mitochondrial efficiency, and thus human energy production, through your diet. The improved mitochondrial function may come from increased oxygen efficiency during exercise. Consider adding nitrate-rich foods like beets and arugula.

Mitochondria is the powerhouse of the cell

Your body uses oxygen to produce (12) ATP, the body's energy currency. Every time you flex a muscle or even think, your body uses up ATP.
You must therefore replenish ATP by breathing in more oxygen. ATP synthase is a microscopic enzyme deep within the cell that produces ATP. This enzyme works like a rotary mechanical motor: oxygen generates a flow of protons which, like the flow over a water wheel, causes the enzyme to turn and make ATP. Like any motor, the process is inefficient; gears can slip and protons can leak out the edges.

Nitric Oxide and Mitochondria Energy Production

Beets are one of the most abundant sources of dietary nitrates. Your stomach absorbs (13) dietary nitrates, actively concentrates them, and sends them back to your mouth through your salivary glands. Your tongue hosts special bacteria that convert nitrates into nitrites. These nitrites are then re-swallowed and re-absorbed. Lastly, nitrites are sent to your cells, where they are converted into nitric oxide. Nitric oxide improves the efficiency of the proton pump by reducing slippage, plugging holes, or even taking the place of oxygen. Beets, rich in nitrates, can thereby reduce oxygen cost, increase oxygen efficiency, and improve athletic performance.

Oral Bacteria and Nitrates

The natural flora on your tongue bioactivate (14) nitrate by breaking it down into the more reactive nitrite. A study of seven healthy volunteers found that using an antibacterial mouthwash prevented nitrate from being converted to nitrite in the saliva and slowed the rise in plasma nitrite. Nitrate bioactivation into plasma nitrite therefore depends heavily on nitrate conversion by the natural tongue flora. Antibacterial mouthwash can negate the NO-dependent biological benefits of dietary nitrate.

A Mitochondria-Safe Mouthwash

Green tea is a natural alternative to antibacterial mouthwash. Researchers found (15) that green tea was more effective than chlorhexidine at reducing plaque. Using green tea as a mouthwash may be safer, cheaper, and better than what you can buy in the store. Amla berry powder is another mitochondria-protective alternative. Research has shown that amla berry not only kills off cavity-causing bacteria, it can also suppress (16) the bacteria's plaque-forming abilities. Additionally, amla berries have over 200 times the antioxidant content of blueberries! To learn more about the organic amla berry powder I use every day, check out the current price on Amazon. Helpful tip: replace your mouthwash with amla berry powder or green tea powder mixed in water.

Sulforaphane and Mitochondria

Sulforaphane can double (17) the mass of mitochondria in human cells growing in a Petri dish. Sulforaphane is a potent antioxidant found in abundance in cruciferous vegetables such as broccoli, kale, cabbage, and cauliflower. Sulforaphane is most concentrated in broccoli sprouts, which are cheap and easy to grow.

Caloric Restriction and Mitochondria

Caloric restriction can slow down aging and extend lifespan. When food is plentiful, your cells divide. But when food is scarce, your body goes into conservation mode, slows down cell division, and starts (18) the process of autophagy. Autophagy occurs when your body decides there isn't enough food and starts searching your cells for anything unnecessary. Your body recycles broken or useless cell components into new or improved ones. While you are fasting, damaged mitochondria are recycled into healthy and productive mitochondria.

Sleep and Mitochondrial Health

Sleep disorders can cause mitochondrial dysfunction.
Even undiagnosed sleep problems, like not getting quality sleep every night, can weaken (19) your mitochondria. Your body, and especially your brain, does most of its cleansing while you sleep. Your brain uses more energy than any other organ in your body, and so it creates a great deal of waste. It cleanses itself through the glymphatic system: channels between neurons expand while brain cells shrink, allowing cerebrospinal fluid in to flush out dead cells, toxins, wastes, and byproducts. This process is about 10 times more effective when you are asleep than when you are awake. Insufficient waste removal from a lack of quality sleep leads to inefficient mitochondria, reducing your health and energy potential and aging you faster.

Inflammation and Mitochondrial Dysfunction

Mitochondrial dysfunction can be (20) both a cause and an effect of inflammation. Inflammation is your body's healing response to stress. Mitochondrial dysfunction prompts your body to make metabolic adaptations that are protective in the short term; nevertheless, prolonged adaptations can have negative consequences. Chronic inflammation is a factor in many diseases and speeds up the aging process.

Ways to reduce inflammation include:
- Intermittent fasting is a form of stress that sends your body into autophagy, thereby recycling damaged cells into new ones. Less damage means less inflammation.
- Omega-3 fats reduce inflammation, while excess omega-6 fats can promote it. Ground flax seeds and chia seeds are excellent sources of the short-chain omega-3 (ALA), but algae is the safest and most effective source of the long-chain omega-3s (DHA and EPA). Even fish get their omega-3s from algae.
- Avoid fish, as even wild fish can be loaded with toxic heavy metals that cause inflammation.
- Reduce your toxic load: eliminate or limit environmental toxins and eat healthy, organic food in its natural form. Add antioxidant-rich foods to every meal to handle the oxidation that comes from consuming and burning energy.
- Nitrates: nitrates convert into nitric oxide, which can improve your oxygen efficiency and reduce free radical byproducts.

If you want to learn about the vegan algae oil capsules I use, click here to check out the current pricing on Amazon.

Exercise and Mitochondria

When you exercise, you are asking your mitochondria to produce more energy. Your body responds by making more mitochondria to meet your needs. One study found that a 12-week exercise program can significantly improve (21) mitochondrial function in skeletal muscle cells.

Mitochondria and mTOR

mTOR is an enzyme that helps control the rate at which you age. When mTOR activity is (22) inhibited, mitochondria lengthen; when mTOR is activated, mitochondria become fragmented. Progressive fragmentation of mitochondria is a possible risk factor for certain cancers. Learn more about how to slow down mTOR, the engine of aging, in How Can You Slow Down Aging? mTOR

Limits to the Mitochondrial Theory of Aging

Evidence supporting the mitochondrial theory of aging is mostly correlative, so the theory remains unproven. For example, research has found that animals that produce (23) fewer free radicals have less oxidative damage in their tissues. Although this association appears promising, it raises more questions. Paradoxically, the longest-living rodent produces high levels of free radicals and sustains exceptionally high levels of oxidative damage in its proteins, lipids, and DNA.
The mitochondrial theory of aging is built on the premise that free radicals damage mitochondrial DNA (mtDNA). Damage to mtDNA should therefore correlate negatively with maximum lifespan. In contrast with the theory, high levels of oxidative damage in mtDNA do not shorten the lifespan of mice.

Caloric restriction is the only natural intervention proven to increase mean and maximum lifespan in mammals. Calorie-restricted animals do produce far less mitochondrial reactive oxygen species (mtROS), which supports the mitochondrial theory. Nonetheless, caloric restriction has other benefits, including decreased insulin signaling, so the increase in longevity cannot be attributed entirely to mitochondrial protection. The evidence supporting the mitochondrial theory of aging is somewhat contradictory and still inconclusive; the mitochondrial theory therefore remains just a theory.

Conclusion: Thoughts on Mitochondria
- Improving mitochondrial function appears to increase energy production and promote health.
- The factors that promote mitochondrial health independently promote overall health.
- There are limits to animal trials; we need more evidence from human studies.
- Nevertheless, I am not going to wait for scientific proof before I make healthy changes.
- If you give your body a goal to focus on, it will do all that it can to help you. Paradoxically, if you want more energy, you need to use up what you have.
- Your body follows your lead. Whatever changes you make, good or bad, it will try to adapt.
- If you want to learn more, do your own research, comment, or ask a question.
- What about mitochondria interests you?

Mitochondria Action Plan
- Avoid toxins in the environment
- Minimize or cut animal products from your diet
- Eat plenty of naturally antioxidant-rich foods with every meal
- Eat plenty of foods rich in omega-3s and reduce omega-6
- Avoid free-radical-promoting substances like food preservatives, artificial additives, alcohol, and tobacco
- Use green tea or amla berry powder as a natural and effective mouthwash
- If you want more energy, get moving
https://agelessinvesting.com/Mitochondria-is-the-powerhouse-of-the-cell/
This is a colored transmission electron micrograph (TEM) of a mitochondrion.

Cells are the basic components of living organisms. The two major types of cells are prokaryotic and eukaryotic cells. Eukaryotic cells have membrane-bound organelles that perform essential cell functions, and mitochondria are considered the "power houses" of eukaryotic cells.

What does it mean to say that mitochondria are the cell's power producers? These organelles generate power by converting energy into forms that are usable by the cell. Located in the cytoplasm, mitochondria are the sites of cellular respiration, the process that ultimately generates fuel for the cell's activities from the foods we eat. Mitochondria produce the energy required for processes such as cell division, growth, and cell death.

Mitochondria have a distinctive oblong or oval shape and are bounded by a double membrane. The inner membrane is folded, creating structures known as cristae. Mitochondria are found in both animal and plant cells. They are found in all body cell types except mature red blood cells. The number of mitochondria within a cell varies depending on the type and function of the cell. As mentioned, red blood cells do not contain mitochondria at all; the absence of mitochondria and other organelles in red blood cells leaves room for the millions of hemoglobin molecules needed to transport oxygen throughout the body. Muscle cells, on the other hand, may contain thousands of mitochondria to provide the energy required for muscle activity. Mitochondria are also abundant in fat cells and liver cells.

Mitochondria have their own DNA and ribosomes and can make their own proteins. Mitochondrial DNA (mtDNA) encodes proteins that are involved in electron transport and oxidative phosphorylation, which occur in cellular respiration. In oxidative phosphorylation, energy in the form of ATP is generated within the mitochondrial matrix. The mtDNA also encodes the RNA molecules transfer RNA and ribosomal RNA.

Mitochondrial DNA differs from the DNA found in the cell nucleus in that it lacks the DNA repair mechanisms that help prevent mutations in nuclear DNA. As a result, mtDNA has a much higher mutation rate than nuclear DNA. Exposure to the reactive oxygen produced during oxidative phosphorylation also damages mtDNA.

Mitochondria are bounded by a double membrane. Each of these membranes is a phospholipid bilayer with embedded proteins. The outermost membrane is smooth, while the inner membrane has many folds, called cristae. The folds enhance the "productivity" of cellular respiration by increasing the available surface area.

Within the inner mitochondrial membrane are a series of protein complexes and electron carrier molecules that form the electron transport chain (ETC). The ETC represents the third stage of aerobic cellular respiration and the stage where the vast majority of ATP molecules are generated. ATP is the body's main source of energy and is used by cells to perform important functions such as muscle contraction and cell division.

The double membranes divide the mitochondrion into two distinct compartments: the intermembrane space and the mitochondrial matrix. The intermembrane space is the narrow space between the outer membrane and the inner membrane, while the mitochondrial matrix is the area completely enclosed by the innermost membrane.
The mitochondrial matrix contains mitochondrial DNA (mtDNA), ribosomes, and enzymes. Several of the steps in cellular respiration, including the citric acid cycle and oxidative phosphorylation, occur in the matrix due to its high concentration of enzymes.

Mitochondria are semi-autonomous in that they are only partially dependent on the cell to replicate and grow. They have their own DNA and ribosomes, make their own proteins, and have some control over their reproduction. Similar to bacteria, mitochondria have circular DNA and replicate by a reproductive process called binary fission. Prior to replication, mitochondria merge together in a process called fusion. Fusion is needed to maintain stability; without it, mitochondria would get smaller and smaller as they divide, and these smaller mitochondria would not be able to produce sufficient energy for proper cell function.

Other organelles of the cell include:
- Nucleus - houses DNA and controls cell growth and reproduction.
- Ribosomes - aid in the production of proteins.
- Golgi Complex - manufactures, stores, and exports cellular molecules.
- Peroxisomes - detoxify alcohol, form bile acid, and break down fats.
- Cytoskeleton - a network of fibers that supports the cell.
- Cilia and Flagella - cell appendages that aid in cellular locomotion.
https://www.thoughtco.com/mitochondria-defined-373367
Posted by: Glow-worm PJ9 FEB 2012

Parkinson's disease attacks the substantia nigra of the brain, which is responsible for the control of movement. In Parkinson's sufferers, damage to the mitochondria in the dopaminergic neurones of the brain, along with a build-up of harmful by-products, causes depletion of these neurones. There has been some uncertainty as to whether mitochondrial damage is a cause or a consequence of Parkinson's disease, but research into two genes known to be involved in the disease, called parkin and PINK1, has supported the idea that mitochondrial damage plays a causative role. Cells deficient in either protein have abnormal mitochondria. Parkin is recruited to damaged mitochondria to aid their destruction, but the process depends on the presence of PINK1. It was found that giving the cells extra PINK1 led to increased parkin recruitment and mitochondrial destruction.

Other research studied the effect of parkin on larvae of the fruit fly Drosophila. The larvae showed a marked decrease in speed and slower muscle contractions, reminiscent of bradykinesia. It was also discovered that levels of adenosine triphosphate, and therefore energy production, were decreased, with an increase in lactate levels. The parkin larvae also showed oxidative stress caused by high levels of free radicals.

Further research carried out at Cambridge University has centred upon the fact that viruses often stabilise mitochondria in cells they infect in order to increase the cell's chances of survival as a host. A viral protein was injected into rats with Parkinson's-like brain lesions, and it was discovered that they performed better in tests involving motor function, and their brains were found to contain more dopaminergic neurones.

Much remains unclear about the role of neuronal mitochondria in the development of Parkinson's disease, but this research gives hope for future management of the disease.
https://www.pharmaceutical-journal.com/news-and-analysis/opinion/blogs/parkinsonism-hope/11094799.blog?firstPass=false
"Urolithin A is the only known molecule to activate mitophagy that has been shown to be safe and effective in rigorous placebo-controlled human clinical studies focused on mitochondrial health and muscle." ~Prof. Dr. med. Johann Auwerx

Urolithin A is a small molecule found in certain foods that has recently gained attention for its potential to support healthy aging and improve physical performance. This exciting compound is generated by the breakdown of ellagitannins, which are plant compounds found in a variety of fruits and nuts, including pomegranates, raspberries, walnuts, and almonds.

What does Urolithin A do?

One of the key mechanisms through which urolithin A appears to exert its effects is by increasing mitophagy (the clearance of damaged mitochondria) and supporting the health and function of mitochondria, the energy-producing structures found in every cell of the body. Mitochondria are responsible for generating the majority of the energy that cells need to function, and their health and function decline with age. This decline in mitochondrial function is thought to play a key role in the aging process and may contribute to a range of age-related conditions, including decreased physical performance and increased risk of chronic diseases.

Urolithin A has been shown to support the health and function of mitochondria in a number of ways. For example, it has been shown to stimulate the production of new mitochondria, a process known as mitochondrial biogenesis. It has also been shown to increase the efficiency of mitochondria, helping them to produce more energy while using less oxygen. In addition, urolithin A has been shown to protect mitochondria from oxidative stress and inflammation, which can damage these important structures and contribute to their decline. Urolithin A has been shown to exert anti-aging effects and to increase mitochondrial activity and muscle function, potentially due to its mitophagy-inducing and antioxidant effects.

The potential benefits of urolithin A are not limited to its effects on mitochondria. This compound has also been shown to have anti-inflammatory and antioxidant properties, which may help to support overall health and well-being. In addition, urolithin A has been shown to support the health of the gut microbiome, the community of microorganisms that live in the digestive tract. A healthy gut microbiome is important for a number of reasons, including supporting immune function and helping to maintain a healthy weight.

So, how can you get more urolithin A in your diet? While urolithin A is found naturally in certain foods, the amount present in these foods is typically quite small. However, the body can convert the ellagitannins found in these foods into urolithin A, provided that certain gut microbes are present. Conversion rates vary from person to person. Upon consumption of pomegranate juice, for example, compounds known as ellagitannins are broken down in the stomach and transformed by intestinal bacteria into urolithin A. This biotransformation has been shown to vary widely across individuals, with some showing high or low conversion rates, while others are unable to perform the conversion at all.

Mitopure Urolithin A

In addition to dietary sources, urolithin A is also available in supplement form. The Mitopure brand of Urolithin A is a supplement produced by Timeline Nutrition that is made from a purified form of the compound.
It is available in capsule form and is designed to provide a convenient and consistent way to get the potential benefits of this compound. The dosage of each capsule is clearly marked, and it is generally recommended to start with a lower dosage and gradually increase as needed. Mitopure is a highly pure form of Urolithin A, a postbiotic clinically shown to energize cells, increase muscle strength and improve endurance. And this is just the beginning. New studies continuously explore and prove the incredible potential of Urolithin A. Few people can get enough Urolithin A from diet alone. Timeline Nutrition’s Mitopure unlocks 6X the dose of Urolithin A when compared to dietary sources such as pomegranate juice, without the sugar. Mitochondria, our cellular powerhouses, are constantly renewed to fulfill the vast energy demands of cells. As we age, mitochondrial function declines - starting as early as in our 30s. Mitopure stimulates the mitochondrial renewal process to protect cells from age-associated decline. The Mitopure brand of Urolithin A is a reliable and trustworthy choice for those looking to incorporate this exciting compound into their wellness routine. Whether you are interested in supporting healthy aging, improving physical performance, or simply looking to support your overall health and well-being, the Mitopure brand of Urolithin A is worth considering. If you decide to give Mitopure a try, use code KETOBRAINZ at check out to save! Visit Timeline Nutrition to learn more!
https://ketobrainz.com/blogs/news/what-is-urolithin-a
Article Peer Reviewed
A precision therapeutic strategy for hexokinase 1-null, hexokinase 2-positive cancers.
Xu, Shili; Catapang, Arthur; Braas, Daniel; Stiles, Linsey; Doh, Hanna M; Lee, Jason T; Graeber, Thomas G; Damoiseaux, Robert; Shirihai, Orian; Herschman, Harvey R; et al. UCLA Previously Published Works (2018)

Background: Precision medicine therapies require identification of unique molecular cancer characteristics. Hexokinase (HK) activity has been proposed as a therapeutic target; however, different hexokinase isoforms have not been well characterized as alternative targets. While HK2 is highly expressed in the majority of cancers, cancer subtypes with differential HK1 and HK2 expression have not been characterized for their sensitivities to HK2 silencing.
Methods: HK1 and HK2 expression in the Cancer Cell Line Encyclopedia dataset was analyzed. A doxycycline-inducible shRNA silencing system was used to examine the effect of HK2 knockdown in cultured cells and in xenograft models of HK1-HK2+ and HK1+HK2+ cancers. Glucose consumption and lactate production rates were measured to monitor HK activity in cell culture, and 18F-FDG PET/CT was used to monitor HK activity in xenograft tumors. A high-throughput screen was performed to search for synthetically lethal compounds in combination with HK2 inhibition in HK1-HK2+ liver cancer cells, and a combination therapy for liver cancers with this phenotype was developed. A metabolomic analysis was performed to examine changes in cellular energy levels and key metabolites in HK1-HK2+ cells treated with this combination therapy. The CRISPR-Cas9 method was used to establish isogenic HK1+HK2+ and HK1-HK2+ cell lines to evaluate HK1-HK2+ cancer cell sensitivity to the combination therapy.
Results: Most tumors express both HK1 and HK2, and subsets of cancers from a wide variety of tissues of origin express only HK2. Unlike HK1+HK2+ cancers, HK1-HK2+ cancers are sensitive to HK2 silencing-induced cytostasis. Synthetic lethality was achieved in HK1-HK2+ liver cancer cells by the combination of DPI, a mitochondrial complex I inhibitor, and HK2 inhibition. Perhexiline, a fatty acid oxidation inhibitor, further sensitizes HK1-HK2+ liver cancer cells to the complex I/HK2-targeted therapeutic combination. Although HK1+HK2+ lung cancer H460 cells are resistant to this therapeutic combination, isogenic HK1KOHK2+ cells are sensitive to this therapy.
Conclusions: The HK1-HK2+ cancer subsets exist among a wide variety of cancer types. Selective inhibition of the HK1-HK2+ cancer cell-specific energy production pathways (HK2-driven glycolysis, oxidative phosphorylation and fatty acid oxidation), due to the unique presence of only the HK2 isoform, appears promising to treat HK1-HK2+ cancers. This therapeutic strategy will likely be tolerated by most normal tissues, where only HK1 is expressed.

Article Peer Reviewed
Cell cycle-related metabolism and mitochondrial dynamics in a replication-competent pancreatic beta-cell line.
Montemurro, Chiara; Vadrevu, Suryakiran; Gurlo, Tatyana; Butler, Alexandra E; Vongbunyong, Kenny E; Petcherski, Anton; Shirihai, Orian S; Satin, Leslie S; Braas, Daniel; Butler, Peter C; Tudzarova, Slavica; et al. UCLA Previously Published Works (2017)

Cell replication is a fundamental attribute of growth and repair in multicellular organisms.
Pancreatic beta-cells in adults rarely enter cell cycle, hindering the capacity for regeneration in diabetes. Efforts to drive beta-cells into cell cycle have so far largely focused on regulatory molecules such as cyclins and cyclin-dependent kinases (CDKs). Investigations in cancer biology have uncovered that adaptive changes in metabolism, the mitochondrial network, and cellular Ca2+ are critical for permitting cells to progress through the cell cycle. Here, we investigated these parameters in the replication-competent beta-cell line INS 832/13. Cell cycle synchronization of this line permitted evaluation of cell metabolism, mitochondrial network, and cellular Ca2+ compartmentalization at key cell cycle stages. The mitochondrial network is interconnected and filamentous at G1/S but fragments during the S and G2/M phases, presumably to permit sorting to daughter cells. Pyruvate anaplerosis peaks at G1/S, consistent with generation of biomass for daughter cells, whereas mitochondrial Ca2+ and respiration increase during S and G2/M, consistent with increased energy requirements for DNA and lipid synthesis. This synchronization approach may be of value to investigators performing live cell imaging of Ca2+ or mitochondrial dynamics commonly undertaken in INS cell lines because without synchrony widely disparate data from cell to cell would be expected depending on position within cell cycle. Our findings also offer insight into why replicating beta-cells are relatively nonfunctional secreting insulin in response to glucose. They also provide guidance on metabolic requirements of beta-cells for the transition through the cell cycle that may complement the efforts currently restricted to manipulating cell cycle to drive beta-cells through cell cycle. Article Peer Reviewed The impact of exercise on mitochondrial dynamics and the role of Drp1 in exercise performance and training adaptations in skeletal muscle. Moore, Timothy M Zhou, Zhenqi Cohn, Whitaker Norheim, Frode Lin, Amanda J Kalajian, Nareg Strumwasser, Alexander R Cory, Kevin Whitney, Kate Ho, Theodore Ho, Timothy Lee, Joseph L Rucker, Daniel H Shirihai, Orian van der Bliek, Alexander M Whitelegge, Julian P Seldin, Marcus M Lusis, Aldons J Lee, Sindre Drevon, Christian A Mahata, Sushil K Turcotte, Lorraine P Hevener, Andrea L et al. UC Irvine Previously Published Works (2019) OBJECTIVE:Mitochondria are organelles primarily responsible for energy production, and recent evidence indicates that alterations in size, shape, location, and quantity occur in response to fluctuations in energy supply and demand. We tested the impact of acute and chronic exercise on mitochondrial dynamics signaling and determined the impact of the mitochondrial fission regulator Dynamin related protein (Drp)1 on exercise performance and muscle adaptations to training. METHODS:Wildtype and muscle-specific Drp1 heterozygote (mDrp1+/-) mice, as well as dysglycemic (DG) and healthy normoglycemic men (control) performed acute and chronic exercise. The Hybrid Mouse Diversity Panel, including 100 murine strains of recombinant inbred mice, was used to identify muscle Dnm1L (encodes Drp1)-gene relationships. RESULTS:Endurance exercise impacted all aspects of the mitochondrial life cycle, i.e. fission-fusion, biogenesis, and mitophagy. Dnm1L gene expression and Drp1Ser616 phosphorylation were markedly increased by acute exercise and declined to baseline during post-exercise recovery. 
Dnm1L expression was strongly associated with transcripts known to regulate mitochondrial metabolism and adaptations to exercise. Exercise increased the expression of DNM1L in skeletal muscle of healthy control and DG subjects, despite a 15% ↓(P = 0.01) in muscle DNM1L expression in DG at baseline. To interrogate the role of Dnm1L further, we exercise trained male mDrp1+/- mice and found that Drp1 deficiency reduced muscle endurance and running performance, and altered muscle adaptations in response to exercise training. CONCLUSION:Our findings highlight the importance of mitochondrial dynamics, specifically Drp1 signaling, in the regulation of exercise performance and adaptations to endurance exercise training.
https://escholarship.org/search/?q=author%3A%22Shirihai%2C%20Orian%22
Cardiac Output
What is responsible for a higher maximal cardiac output? Q = HRmax x SVmax. Does HRmax increase with training? Does SVmax increase with training?

5 Stroke Volume
What is responsible for a higher SVmax?

6 LEFT VENTRICULAR HYPERTROPHY

7 STROKE VOLUME AND TRAINING

8 DIFFERENCES IN EDV (filling volume), ESV (residual volume), AND EF (percent of total volume ejected)

9 Stroke Volume
A larger and stronger heart produces an increase in stroke volume at rest, during submaximal exercise, and during maximal exercise. A higher stroke volume at rest and during submaximal exercise allows for a lower heart rate without changing cardiac output.

10 Stroke Volume
A higher maximal stroke volume produces a higher cardiac output. A higher cardiac output produces a higher VO2max. A higher VO2max indicates a greater capacity for aerobic energy production.

11 Stroke Volume
What type of aerobic training is most effective in strengthening the heart and thus increasing stroke volume?

12 Heart Rate
What effect will a larger SV have on resting HR? What effect will a larger SV have on submaximal exercise HR? What effect will a larger SV have on maximal exercise HR?

13 HEART RATE AND TRAINING

14 Heart Rate Recovery Period
- The time after exercise that it takes your heart to return to its resting rate
- With training, heart rate returns to resting level more quickly after exercise
- Has been used as an index of cardiorespiratory fitness
- Conditions such as altitude or heat can affect it
- Should not be used to compare individuals to one another

15 HEART RATE RECOVERY AND TRAINING

16 Blood Flow
What other changes occur with training that allow for an increase in blood flow to the muscle? Capillaries? Blood?

17 Capillaries

18 BLOOD AND PLASMA VOLUME AND TRAINING
Blood volume? Red blood cells? Hematocrit? Viscosity? Blood flow distribution?

19 Blood Volume and Training
- Endurance training, especially intense training, increases blood volume.
- Blood volume increases due to an increase in plasma volume (increases in ADH, aldosterone, and plasma proteins cause more fluid to be retained in the blood).
- Red blood cell volume increases, but the increase in plasma volume is greater; thus, hematocrit decreases.
- Blood viscosity decreases, improving circulation and enhancing oxygen delivery.
- Changes in plasma volume are highly correlated with changes in SV and VO2max.

20 a-v O2 difference
What else needs to happen, besides an increase in blood flow and blood volume, in order for VO2max to increase? Capillaries? Myoglobin? Mitochondria?

21 Cardiovascular Adaptations to Training: Cardiac Output
- Left ventricle size and wall thickness increase
- Stroke volume increases, as do Qmax and VO2max
- Resting and submaximal heart rates decrease
- Maximal heart rate stays the same or decreases
- Blood volume increases
- Increase in a-v O2 difference
- More capillaries, myoglobin, and mitochondria

22 Cardiovascular Adaptations to Training
(table comparing VO2, Q, HR, SV, and a-v O2 difference at rest, during submaximal exercise at the same intensity, and at maximal exercise)

23 Blood Pressure and Training
- Blood pressure changes little during submaximal or maximal exercise.
- Resting blood pressure (both systolic and diastolic) is lowered with endurance training in individuals with borderline or moderate hypertension.
- Lifting heavy weights can cause increases in systolic and diastolic blood pressure during the lift, but resting blood pressure after weight training tends to stay the same or decrease.
24 Lactate Threshold
What effect would an increased oxygen supply to the muscles during exercise have on the lactate threshold? What effect would this have on aerobic performance?

25 BLOOD LACTATE AND TRAINING

26 At Rest
At rest the heart can supply all the needed oxygen with a cardiac output of 5 liters per minute. If the resting stroke volume is higher due to aerobic training, how will the resting heart rate be different? What about parasympathetic stimulation?

27 Submaximal Exercise
Before training, running at 6 mph required a cardiac output of 15 liters per minute and a heart rate of 140 bpm. Since stroke volume increases after weeks of training, what will happen to the heart rate while running at 6 mph? Why? What would happen to the running speed if the trained person now ran at a heart rate of 140 bpm? If the lactate threshold used to occur at 6 mph, at what speed will it occur now? Why?

28 Maximal Exercise
- Increase in VO2max
- Increased SV and blood volume
- An indicator of aerobic fitness level

30 CHANGE IN RACE PACE, NOT VO2MAX

31 Aerobic Endurance and Performance
- The major defense against the fatigue that limits optimal performance.
- Should be the primary emphasis of training for health and fitness.
- All athletes can benefit from maximizing their endurance.

32 Respiratory Adaptations to Training
- Static lung volumes remain unchanged; tidal volume, unchanged at rest and during submaximal exercise, increases with maximal exertion.
- Respiratory rate stays steady at rest, decreases with submaximal exercise, and can increase dramatically with maximal exercise after training.
- Pulmonary ventilation increases during maximal effort after training. (continued)

33 Respiratory Adaptations to Training
- Pulmonary diffusion increases at maximal work rates.
- The a-v O2 diff increases with training due to more oxygen being extracted by the tissues.
- The respiratory system is seldom a limiter of endurance performance.
- All the major adaptations of the respiratory system to training are most apparent during maximal exercise.
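The arithmetic behind these slides can be made concrete with a small sketch. This is not part of the original deck; the 15 L/min cardiac output at 6 mph and the 140 bpm pre-training heart rate come from the slides above, while the post-training stroke volume of 125 mL/beat is an assumed illustrative value.

```python
# Toy illustration of Q = HR x SV using the numbers from the slides above.
# The post-training stroke volume (125 mL/beat) is an assumed example value.

def stroke_volume_ml(cardiac_output_l_min: float, heart_rate_bpm: float) -> float:
    """Stroke volume in mL/beat from cardiac output (L/min) and heart rate (beats/min)."""
    return cardiac_output_l_min * 1000 / heart_rate_bpm

def required_heart_rate(cardiac_output_l_min: float, stroke_volume_ml_beat: float) -> float:
    """Heart rate (bpm) needed to deliver a given cardiac output at a given stroke volume."""
    return cardiac_output_l_min * 1000 / stroke_volume_ml_beat

# Before training: running at 6 mph needs ~15 L/min of cardiac output at 140 bpm.
sv_before = stroke_volume_ml(15, 140)        # ~107 mL/beat
# After training, stroke volume rises (assume ~125 mL/beat for illustration),
# so the same 15 L/min can be delivered at a lower heart rate.
hr_after = required_heart_rate(15, 125)      # ~120 bpm

print(f"Stroke volume before training: {sv_before:.0f} mL/beat")
print(f"Heart rate at 6 mph after training: {hr_after:.0f} bpm")
```

The same relationship explains the resting numbers on slide 26: if the heart only needs to deliver about 5 L/min at rest, a larger stroke volume necessarily means a lower resting heart rate.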
https://slideplayer.com/slide/4964384/
The cells with the greatest numbers of mitochondria are fat and muscle cells. The breakdown of adenosine triphosphate (ATP), which is produced in the mitochondria, supplies these cells with the abundant energy they need to perform their many duties.

Cells are the essential building blocks of all living organisms and occur in a variety of forms, sizes, and shapes. In the majority of multicellular creatures, cells are compartmentalised into structures known as organelles that perform highly specialised functions. These organelles include the plasma membrane, cytoplasm, nucleus, Golgi complex, channels or pores, endoplasmic reticulum, ribosomes, chloroplasts, vesicles, peroxisomes, vacuoles, cell wall, centrioles, lysosomes, cytoskeleton and mitochondria. Mitochondria, considered the "powerhouses" or "energy factories" of the cell, are found in both animal and plant cells.

The human body is made up of over 200 types of cells. The majority of these cells are alive, although the cells in hair, nails, and certain tooth and bone structures are nonliving. On average, animal cells contain between 1,000 and 2,000 mitochondria. The synthesis of energy in the form of ATP molecules takes place in the mitochondria, through a sequence of metabolic events known as respiration. Fat cells, which store surplus energy in fatty tissue, contain an abundance of mitochondria. Muscle cells, which are responsible for body movement, also contain a large number of mitochondria.
https://thewidely.com/what-cells-contain-the-most-mitochondria/
What Is Mitochondrial Dysfunction? Mitochondria are a vital part of all cells in our body. They provide power for the functions that keep us alive and healthy, such as breathing, digesting food, and thinking. But they also have many other jobs apart from being just energy generators. Their duties range from protection against diseases to playing an essential role in the aging process. As a result, if there is a decrease in the function of mitochondria in your cells, the rest of your body can be affected. This reduction in functionality is known as mitochondrial dysfunction, and it can impact your health in several important ways. Let’s talk about mitochondrial dysfunction, how it affects your health, and what supportive measures you can take to optimize the health of these important organelles. What are mitochondria and what do they do? Mitochondria are the energy-producers found in almost all of your cells. They create energy from the food you eat to make adenosine triphosphate (ATP). ATP is the energy-carrying compound used by cells to fuel your body's functions. According to a review published in Molecular Cell, mitochondria also play a role in supporting your immune system and cellular signaling when the body is under stress. Mitochondria come in a variety of shapes and sizes, depending on where they are found. A review published in EMBO Reports explains that the shape of mitochondria influences their function. This is especially important to note because, as illustrated in International Journal of Molecular Sciences, mitochondrial shape and functionality appear to be very sensitive to environmental exposures like pesticides. In other words, exposure to toxins, even commonly used pesticides, may impact your mitochondrial health and increase your risk of developing certain chronic health conditions. What is mitochondrial dysfunction? Mitochondrial dysfunction occurs when your mitochondria lose the ability to function normally. It can happen if the mitochondria present in your cells are not functioning as they should. What is the relationship between mitochondria and free radicals? When mitochondria make ATP, they also generate free radicals, which are molecules that can cause damage to proteins and DNA in your body. Ideally, free radicals are balanced in the body by antioxidants that help make them less dangerous. Free radicals are also produced in response to different environmental factors such as exposure to toxins and inflammation, but mitochondria are the primary producers in your body. At the same time, mitochondria help clean up free radicals, so it's a delicate balance between the two. When mitochondria are dysfunctional, they no longer produce ATP efficiently while increasing their production of free radicals, throwing off that balance. According to the review article published in npj Regenerative Medicine, if mitochondria are damaged, not only is energy metabolism affected, but they can also produce more free radicals increasing the potential for oxidative damage in the body. As described in Cell Death and Disease, mitochondrial dysfunction can also influence antioxidant activity in the cell, and the entire cell can be damaged or destroyed. Essentially the entire health of your cell can be affected as a result of improperly functioning mitochondria. What causes mitochondrial dysfunction? In a healthy state, mitochondrial numbers are balanced by creating new functional mitochondria (mitochondrial biogenesis) and removing any that are damaged or dysfunctional (mitophagy). 
But this process can be negatively impacted by many reasons. Damage to mitochondria from free radicals and other environmental factors like toxins or inflammation may lead to mitochondrial dysfunction as the cells become overwhelmed with free radical production. Additionally, if there is inadequate availability of nutrients such as B vitamins, mitochondria will not produce the energy they need. Several health conditions may also result in mitochondrial dysfunction because their mechanisms can lead to oxidative stress or inflammation that affects mitochondrial function. The result? As described in the journal Biology, the presence or accumulation of dysfunctional mitochondria can increase and eventually contribute to accelerated aging and adverse health outcomes. How does mitochondrial dysfunction influence aging? While mitochondrial dysfunction can result from your environment, research from The Journals of Gerontology suggests that the function and number of mitochondria also naturally decline as we age. Older mitochondria can also change shape, generate more free radicals, and become less efficient at producing ATP. However, it may be a vicious cycle. Increases in inflammation and reduced antioxidant activity also naturally increase as you age. These changes increase free radical production and affect the health of mitochondria. And once again, as described in a review from the journal Cell, poorly functioning mitochondria can increase the production of free radicals, which only makes mitochondrial dysfunction worse. As noted in The Journal of Signal Transduction, these factors add up to influence the aging process and age-related health conditions. Chronic health conditions and mitochondrial dysfunction Mitochondria are also closely connected to several health conditions usually associated with aging. It's well accepted that inflammation and oxidative stress are related to poor health. As written in Endocrine Reviews, mitochondrial health may also be an important piece of the puzzle. Several of the adverse outcomes associated with mitochondrial dysfunction include: Cognitive and neurodegenerative conditions. Conditions related to brain health are often considered diseases of aging. A review from BBA - Molecular Basis for Disease suggests that defects in the way the mitochondria processes ATP, an increase in free radical generation, and the production of specific proteins that stress the mitochondria have all been associated with certain conditions affecting the brain. Blood sugar balance. As illustrated in Antioxidant and Redox Signaling, people with conditions impacting blood sugar balance may also have dysfunctional mitochondria with irregular shapes. However, researchers aren't clear if these changes cause blood sugar imbalances or are a result. Fatigue. Unexplained, unrelenting fatigue is also a hallmark of mitochondrial dysfunction. A review from Metabolic Brain Disease suggests that people who suffer from debilitating fatigue associated with certain health conditions may have dysfunctional mitochondria. Signs of aging. Even the aging process seen in the skin may have a relationship to mitochondrial defects. A review published in Cell Death and Disease found that aging skin is associated with damaged mitochondria, high amounts of free radicals, and oxidative stress. As a result, supporting the health of your mitochondria may offer protection against age-related conditions. What can you do to keep your mitochondria healthy? 
While you can't stop time, you can support your mitochondria with these healthy habits: Intermittent fasting. As seen in a study published in PLOS One, intermittent fasting may support improvements in mitochondrial health by reducing the impact of environmental influences like a high-fat diet. Fasting may support the critical balance between mitophagy and mitochondrial biogenesis, helping remove dysfunctional mitochondria while increasing the number of healthy functioning mitochondria. Exercise. According to a symposium review published in The Journal of Physiology, physical activity supports healthy mitochondria not just in muscle cells but throughout the body. This review suggests that both endurance activity, as well as high-intensity training, can be beneficial. Diet. As explained in a review published in Clinical Nutrition, a healthy diet rich in micronutrients is vital to support healthy energy metabolism and mitochondrial health. Additionally, certain inflammatory foods such as trans fats or heavily processed items can contribute to inflammation in the body and increase free radical production. Mitochondrial supplements. Several supplements have valid research behind their use for supporting your mitochondria, especially relating to the relationship between mitochondrial health and fatigue. These include CoQ10, alpha-lipoic acid, and NAD precursors. Protecting your mitochondria is an investment in your health Mitochondrial dysfunction happens when the function of mitochondria is reduced. It is associated with accelerated aging and certain chronic health conditions. It's important to note that mitochondrial dysfunction is different from genetic mitochondrial diseases, which are inherited disorders. You can support mitochondrial health through lifestyle habits that provide an optimal cellular environment. Reducing the impact of free radical damage and inflammation while increasing the generation of new mitochondria is critical for an optimal balance. Taking steps to keep your mitochondria healthy and happy is beneficial regardless of your age—it's never too early or too late to get started.
https://www.truniagen.com/blog/science-101/what-is-mitochondrial-dysfunction/
In an aging society, health in old age plays an increasingly important role, for each individual but also for society and the healthcare system. The goal is not primarily to extend the maximum lifespan ("lifespan") or even to achieve "immortality", but rather to avoid, or at least significantly shorten, the long period of infirmity that unfortunately often comes at the end of life, and to extend as much as possible the period of life that we can enjoy in the best of health ("healthspan").

Why are people getting older today than they used to?

For humans, external factors such as better hygiene, nutrition and medical care have led to a significant increase in average life expectancy in industrialized nations.

Inhabitants of Germany aged 100 years and older (source: Stat. BA, Human Mortality Database, Robert Bosch Stiftung):
- 1980: 975 (GDR + FRG)
- 2000: 5,699
- 2017: 14,194
- 2037 (e): ~140,000

Proportion of people over 80 in Germany:
- 1950: 0.1%
- 1975: 2.2%
- 2000: 3.6%
- 2025 (e): 7.4%
- 2050 (e): 13.2%

Some aging researchers, however, doubt whether the maximum attainable age, the so-called maximum lifespan, can be extended, because unlike average life expectancy, the maximum has hardly increased. The person with the longest documented age was Jeanne Calment from France, who was born in 1875 and died in 1997 at exactly 122 years and 164 days old. Despite all hygienic and medical advances since the year of her birth, no one has exceeded her age. This suggests that the maximum human lifespan is around 120 years.

Why, for example, are Japanese, French and Italians on the list of the oldest people, but no Germans? Of particular interest to longevity researchers are the so-called "blue zones", in which a conspicuous number of centenarians live; Sardinia and the Japanese island of Okinawa are among them. Studies on the causes of longevity in these zones have shown that the very old there have eaten healthily all their lives, in particular little meat (but not vegetarian), exercised regularly but moderately, and maintained strong social bonds up to the end of life.

According to a US meta-study from 2010, people with many social contacts have a 50% lower risk of dying earlier than expected. Of course, loneliness doesn't have an immediate physical impact, but it does have an indirect one, because lonely people smoke more, are more likely to be overweight, and are less physically active. Long-term stress also makes you age faster, because more damaging stress hormones are released.

In addition, abnormally high spermidine levels are measured in the blood of people in the "blue zones". Spermidine is ingested through food (plants produce it themselves, especially in stressful situations) and is also produced in the body (especially by the microbiome in the intestine). Spermidine stimulates autophagy, the cellular "recycling process". Fermented soy (the Japanese natto), nuts, mushrooms, wheat germ, old/ripened cheeses and green vegetables are particularly rich in spermidine, all ingredients of the cuisine in the blue zones of Japan, Italy and France. It seems that stress and diet in particular stand in the way of especially high longevity in Germany.

Aging processes begin at a young age: primary and secondary aging

So-called "primary aging" begins around the age of 25: from then on, cell performance (the "cell competencies") declines by roughly 1% per year. Of course, this only affects those cells that are not renewed.
For example, the stem cells that are relevant for longevity are not renewed.

Examples:
- Eyes: the elasticity of the lens already decreases at the age of 15, close vision declines from around 40, and there is a risk of cataracts in old age
- Ears: from around the age of 20, the number of hair cells in the cochlea, which are important for the perception of sounds, decreases; age-related hearing loss often sets in from the age of 60
- Lungs: at the age of 20, the production of alveoli decreases; as the elasticity of the lungs decreases, the volume of air that can be inhaled and exhaled becomes smaller
- Reproductive organs: from the age of 25, women's fertility decreases and men's testosterone levels fall
- Joints: from the age of 30, the cartilage loses its elasticity and the intervertebral discs become stiffer
- Skin: from the age of 30, the skin can bind less moisture and loses elasticity
- Hair: from the age of 30, the production of the pigment melanin decreases and then stops completely
- Bones: between the ages of 30 and 40, bone loss begins to outweigh bone formation, so that an 80-year-old has only about 50% of their maximum bone substance
- Muscles: muscle loss begins from the age of 40; a 65-year-old has about 10 kg less muscle mass than a 25-year-old
- Kidneys: at the age of 50, filtration capacity decreases, so that blood purification takes longer and is less effective
- Brain: from the age of 60, reaction time increases and coordination and memory deteriorate
- Heart: at the age of 65, the heart can show signs of age because, for example, the blood vessels calcify and the heart has to pump against a higher resistance
- Immune system: at 65, susceptibility to infection increases because the number of immune cells in the blood decreases

In the sixties, so-called "secondary aging" usually becomes noticeable in the form of typical age-related diseases such as arthrosis, stroke, heart attack, dementia, etc. Care- and cost-intensive illnesses will therefore increase dramatically, so that health in old age is becoming more and more important both from an individual and from a societal point of view. Irrespective of the controversial question of whether aging is a disease, it is important, as with all health issues, not to fight the symptoms of aging with medication, but to focus on the causes of aging. Most longevity approaches are not primarily about extending the maximum lifespan, but about pushing secondary aging back as far as possible; in other words, healthy aging is the focus.

What happens to a cell as it ages?

In order to understand what happens to a cell as it ages, we first need to understand the core cell functions, also referred to as the "cell competencies" (a concept that goes back to Dr. Thrusher):

- Renewal
The number of divisions that a body cell can undergo is limited. As a result, most of our cells will eventually need to be replaced. Around 50 million cells per second (!) are exchanged in our body; almost all 30 trillion body cells are replaced within 7 years. Our stem cells are primarily responsible for this cell renewal. Stem cells are the reservoir from which the various body cells can be differentiated. The only problem is that our stem cells themselves are not replaced and therefore "age" as DNA damage accumulates and the repair systems cannot keep up. Yet when cells divide, the stem cell DNA must be copied with absolutely no errors.
Therefore, keeping the stem cells healthy is particularly important for a healthy longevity. But at some point the stem cell reservoir will be exhausted and there will be no more supplies. In addition, hematopoietic stem cells can mutate in old age and then remain in the blood as pro-inflammatory clones. The freshwater polyp Hydra has therefore aroused particular interest of the Longevity scientists, because its stem cells are permanently active, so that old cells can always be replaced. The idea of the stem cell researchers is therefore to decode the mechanisms of the loss of function of the stem cells in old age in order to then inhibit them with new therapies and thus be able to prolong organ preservation in old age. The cell types that are not renewed or are only slightly renewed include: nerve, heart muscle and sensory cells (eyes, ears). We cannot stop their aging, so that longevity approaches must focus primarily on these cell types in addition to stem cell health. - Energy production The energy for our cells is produced in the mitochondria, the power plants of our cells. The more energy a cell needs or consumes, the more mitochondria it usually has. A cardiac muscle cell, for example, has 5000 mitochondria! Even at rest, the body needs about as many kg of ATP every day as our body weight! During physical activity, ATP production increases significantly again. From the age of 25, however, the mitochondria already lose their performance; i.e. with the same oxygen consumption, the ATP production decreases, so the mitochondria become less efficient. In old age, the mitochondrial performance has decreased by about 50% (!) - which is partly due to the fact that important elements of the respiratory chain such as coenzyme Q10, niacin (vitamin B3) or the coenzyme NAD+ (nicotinamide adenine dinucleotide) or NADH (reduced form of NAD+) decrease with age. More and more free radicals form in the mitochondria as waste products, which damage genetic material, organs, connective tissue, etc. Disorders of the nervous system, such as Parkinson's disease, are often caused by insufficient energy production in certain nerve cells. See also https://www.hih-tuebingen.de/forschung/neurodegeneration/forschungsgruppen/mitochondriale-biologie-der-parkinson-medizin/?tx_jedcookies_main%5Baction%5D=submit&cHash=2ee0704321cb47f67169ef63d0c1c3d3 Therefore, longevity approaches must above all focus on the relevant factors in the citric acid cycle (upstream of the respiratory chain) and the respiratory chain or electron transport chain, and try to fill the deficiency, e.g. with food supplements: - Coenzyme Q10 (as a redox system (ubiquinone/ubiquinol) central component of the mitochondrial electron transport chain) - L-carnitine (is mainly taken in with food (meat) and transports fatty acids through the mitochondrial membrane; in 2002, a study by the University of Leipzig in vivo demonstrated that L-carnitine reduces the breakdown of long-chain fatty acids in healthy adults without L-carnitine deficiency) - Vitamin B6, B9 (folic acid), B12 as important cofactors Even if we can and should influence mitochondrial performance in this way, there are limits for us Europeans compared to East Africans, for example, when it comes to the performance of our mitochondria.This is due to evolution: due to the nomadic way of life, East Africans had to walk long distances with endurance – and those with the best mitochondria survived. 
Therefore, even with the best training, a European can never match the energy production of the mitochondria of Kenyans or Ethiopians; so that the latter also regularly win marathons. But regardless of the basic evolutionary equipment, we can train our mitochondria. And good mitochondrial fitness acquired at a young age carries on into old age. In this context, reference is often made to Churchill, who was a competitive athlete when he was young and who, even in old age, benefited from his well-trained mitochondria for a long time, despite a very unhealthy lifestyle. - detoxification Cellular waste is constantly being produced as part of cell metabolism, such as errors in protein synthesis (misfolded proteins) or damaged parts of the mitochondria. This waste is normally broken down by cellular cleansing processes, primarily by what is known as autophagy, the cellular “recycling system”. The lysosomes then dock onto these waste products, and their enzymes break this waste down into its individual components, making it recyclable. Lysosomes are therefore also referred to as the "stomach" of our cells. Unfortunately, in old age, this autophagy no longer works so well, so that molecular garbage accumulates in the cells and ultimately impairs normal cell functions. Over the years, this cellular waste can then contribute to the relevant diseases of old age, such as diabetes, Alzheimer's or Parkinson's. One way to activate autophagy is through caloric restriction (fasting). Because when food is scarce, the body activates autophagy to release nutrients from the "protein waste". And quasi as a side effect of this nutrient extraction, misfolded proteins and defective organelles are broken down. This also fits well with the observation in numerous studies that caloric restriction in laboratory animals has prolonged life and counteracts aging processes. theories of aging - program theories - a) shortening of the telomeres The telomeres are the protective caps at the ends of the chromosomes. With each cell division, they shorten by a defined number of base pairs. The shorter the telomeres are, the worse the copies turn out - until at some point they are so short that no further cell division takes place and the cell dies. The length of the telomeres is considered an indicator of the so-called biological age, in contrast to the chronological age. The shortening of the telomeres is increased by various factors, such as oxidative stress or chronic inflammation. The good news: Studies indicate that telomeres can also lengthen again. There are promising studies for vitamin D, E, ginkgo and omega 3 fatty acids . See also https://www.wissenschaft.de/gesundheit-medizin/langsamer-altern-durch-mediterrane-ernaehrung/ - b) Hormonal control of aging Why do members of a species live a specific lifetime in evolution? Because the conservation of the species is evolutionarily the most important thing. So evolution roughly calibrates lifespans to allow for rearing and sexual maturity. This also explains why the menopause in women only starts in their mid-40s. Therefore, those hormones that are necessary for reproduction also have a decisive influence on lifespan. E.g.estradiol, which is not only a sex hormone but also ensures that the stem cells in the bone marrow are preserved and multiply without differentiating too much. that are urgently needed. - Damage Theories Damage theories target free radicals. 
Free radicals have an unpaired electron and are therefore particularly aggressive, as they try to snatch an electron from other molecules. In doing so, they are reduced and oxidize the other molecule, which itself becomes a free radical. A chain reaction is set in motion. Free radicals damage tissue and the DNA of our cells, contributing to the aging process and the development of disease. They arise from - chronic/silent inflammation - AGE formation with high sugar consumption - external induction (smoking, environmental toxins, stress, etc.) - ATP synthesis in the mitochondria (oxygen radicals are always formed in the respiratory chain, but with age their proportion increases while ATP production decreases). According to this theory, longevity measures must start with defusing free radicals. This is done by so-called antioxidants. We have an endogenous, enzymatic antioxidant system, but it is not always sufficient to effectively defuse all free radicals. Therefore, antioxidants should be supplied from the outside - either through food or, in concentrated form, through suitable dietary supplements. The particularly effective antioxidants (measured by the so-called ORAC value) include, for example, alpha-lipoic acid, vitamin C and vitamin E.
To what extent is our age and health in old age genetically determined?
- A) Genetics Everyone knows stories like that of Helmut Schmidt, who grew very old despite a very unhealthy lifestyle (e.g. chain smoking) - whereas others who live very healthily die young. Usually the genes are given as the reason. In this context, researchers are interested, among other things, in whether there is a longevity gene - a "Methuselah gene", so to speak. And indeed there is the so-called FOXO3 protein, which appears to boost the enzyme sirtuin 1, which is important for longevity. Everyone has this protein - but two specific variants of FOXO3 are conspicuously common in people over 100 years of age. This was discovered in 2009 by the "Healthy Aging" research group at the University of Kiel. These variants of the FOXO3 gene were also found in the above-mentioned freshwater polyps, whose stem cells are constantly renewed. Since the two variants of FOXO3 occur in very few people and genetics cannot be influenced in this regard, the finding has no practical relevance for longevity approaches. Another study, the "New England Centenarian Study", evaluated data from 1,900 people over 90 and found that, at very old age, further survival depends up to 75% on good genes, i.e. only 25% on lifestyle factors. However, it cannot be concluded from this that our life expectancy is 75% genetically predetermined, because that study expressly refers only to the further life expectancy of those who have already reached a very old age (>= 90 years). A study that does not only include people who have already reached a very old age is that of Dr. Graham Ruby, who compiled Ancestry data (Ancestry is the world's largest platform for genealogical research) from around 54 million people and evaluated their approximately 6 billion ancestors. There a completely different picture emerges: the heritability of lifespan seems to be at most about 7%.
- B) Epigenetics Whereas genetics deals with DNA as the basic genetic equipment that is identical in all our cells, epigenetics deals with the activity state of our genes.
The fact that our ~250 cell types function so differently although their DNA is identical is due to epigenetics, which controls the switching on and off of genes. Unlike genetics, epigenetics is heavily influenced by lifestyle and environmental factors. Identical twins have almost identical epigenetic patterns after birth, which remain similar even in old age if they have a similar lifestyle, but diverge considerably if their lifestyles differ greatly. How exactly does this switching on and off work? Through so-called "methylation": methyl groups are small molecules made up of one carbon and three hydrogen atoms that are attached to the DNA only where the building-block pair CpG (cytosine-guanine) occurs; there they prevent certain gene sequences from being read, i.e. they "switch genes off". As we age, methylation decreases, which means that genes that are not supposed to be active become active and produce proteins that are not needed at all or can even do harm, such as driving inflammation. Steve Horvath, a German-born professor of human genetics and biostatistics at the University of California, Los Angeles (UCLA), evaluated the methylation patterns of thousands of subjects and used them to develop the "epigenetic clock". Similar to telomeres, methylation patterns are therefore also used to determine biological age, in contrast to chronological age. Our laboratory partner Cerascreen, for example, developed the Genetic Age Test in 2018 together with the Fraunhofer Institute, which measures biological age based on the methylation pattern: https://qidosha.com/products/dna-biologisches-alter-test-incl-analysis-by-specialist-laboratory-recommendation?_pos=1&_sid=134b31ef8&_ss=r&variant=41732031905962 The question relevant to longevity approaches is whether - and if so, how - these methylation patterns can be influenced in order to turn back the epigenetic clock. It is known that stress, smoking and obesity have a negative effect on the methylation pattern. Conversely, reducing stress can also restore the original methylation. According to the epigeneticist Prof. Isabelle Mansuy from the University of Zurich, nutrition can also counteract the decline in methylation: broccoli (or the sulforaphane it contains) and above all green tea act as "methyl donors". The epigenetic clock can therefore apparently be turned back after all! (A simplified numerical sketch of how such a clock works is given at the end of this article.)
Which lifestyle factors are relevant for a long and healthy life?
- Nutrition Unsurprisingly, fresh organic vegetables are good for healthy longevity. However, this is less about the harmfulness of pesticides in conventionally grown vegetables and more about the fact that organically grown plants had to cope with fungi, bacteria, a harsh climate etc. without the help of pesticides and are therefore much richer in the phytochemicals that are so important for longevity than, for example, greenhouse-grown or conventionally grown vegetables. Eating a high-fiber diet (mushrooms, berries, oatmeal, etc.) is also recommended, since dietary fiber acts as a prebiotic - "food" for our intestinal bacteria. In a diet low in fiber, the intestinal bacteria use the intestinal mucosa as substitute food, so that antigens can enter the body more easily and cause chronic inflammation, autoimmune diseases or allergies there.
If this has already happened, the medicinal mushroom Hericium is ideal for rebuilding the mucus layer - see also https://qidosha.com/blogs/qidosha-academy/vitalpilze The often propagated "low carb" approach, on the other hand, does not generally make sense, because long-chain carbohydrates, which are contained in many vegetables, are very positive for healthy longevity. Low carb makes sense when it refers to sugar, i.e. short-chain carbohydrates, since sugar is not conducive to healthy longevity due to the formation of AGEs (advanced glycation end products). AGEs are caused by the permanent attachment of glucose to protein and fat compounds. As a result, blood vessels lose their elasticity, muscles their ability to stretch, the skin becomes wrinkled - everything "sticks together" and becomes rigid. In addition, AGEs oxidize LDL particles (low-density lipoprotein, the "bad cholesterol" in contrast to HDL) to form free radicals that damage the vessel walls. Oxidized LDL particles also no longer get into the cells and remain in the blood, which increases the cholesterol level and thus the risk of arteriosclerosis. It is also important to avoid highly processed foods, because they contain additives such as the binder CMC (carboxymethylcellulose) that damage the barrier function of the intestinal mucosa. In addition, they often contain a lot of fat and sugar and little dietary fiber, few phytochemicals, omega-3 fatty acids and micronutrients. And last but not least, the caloric restriction already mentioned above - fasting: this forces the cells into autophagy, which otherwise decreases with age so that cellular waste accumulates. The "recycling" of cellular waste is set in motion whenever the diet no longer provides enough fuel for the mitochondria; the disposal of cellular waste is thus a desirable side effect of fasting. The first systematic study of the beneficial effects of caloric restriction was conducted by Clive McCay in 1937: a 33% caloric restriction in laboratory rats led to a) a significant increase in maximum lifespan and b) an increase in average lifespan of around 50%.
Polyphenols A diet rich in polyphenols is of paramount importance for healthy longevity, so this topic will be dealt with in a separate section. Polyphenols are actually part of the plant's own defences. Quercetin appears to be particularly promising because it activates the longevity enzyme sirtuin 6, but there are also promising studies on OPC, curcumin and EGCG (epigallocatechin gallate) from green tea. Strictly speaking, polyphenols are oxidants, not antioxidants, as they initially increase the production of free radicals and thus activate the cellular "radical defense" (e.g.
catalases) - almost like a vaccination. The activated proteins and enzymes of the radical defense not only render oxygen radicals harmless; as a side effect, enzymes are also formed which - work against chronic inflammatory processes - maintain muscle mass - check the DNA for completeness and repair it if necessary. Green tea contains the highest concentration of EGCG in the plant kingdom, and its positive effect on longevity has been examined in epidemiological studies (observational studies under real-world conditions, as opposed to experimental studies under laboratory conditions). These studies suggest the following effects of EGCG: - reduces the rise in blood sugar levels after carbohydrate-rich meals - has an anti-inflammatory effect - lowers cholesterol levels and increases the elasticity of blood vessels - inhibits the formation of tumor blood vessels and the growth of polyps in the intestine. EGCG should, however, always be consumed as tea and not as an extract in the form of a dietary supplement, otherwise the high concentration could put too much strain on the liver.
- Sleep There are four deep-sleep phases (at different levels) that we should reach. On the one hand, little energy (ATP) is consumed during deep sleep; on the other hand, our glymphatic system (the cerebral lymph, the "flushing system" of our brain that drains pollutants, so to speak) is only active during sleep. During sleep, the nerve cells in the brain "shrink", so that the gap between cells increases and toxic substances, such as beta-amyloids (precursors of Alzheimer's plaques, the insoluble deposits between nerve cells), can be washed away more easily. Receptors in the brain determine the day/night rhythm and our depth of sleep - and unfortunately they are not renewed, i.e. they age. In addition, the melatonin level produced by the pineal gland decreases with age, so that older people often reach the deep-sleep phases only briefly. As a result, with fewer and shorter deep-sleep phases, less energy is available in the form of ATP than in young people, and the "flushing system" of the brain lymph described above can no longer function optimally, which promotes the formation of beta-amyloids and thus Alzheimer's plaques. Cortisol plays a significant role in poor sleep and its impact on healthy longevity. Cortisol is known as the "stress hormone". It is produced in the adrenal cortex from its inactive form, cortisone. Among other things, cortisol also ensures that we wake up in the morning: it rises sharply in the morning and then falls steadily as the day progresses. But if we sleep badly, the cortisol level rises less in the morning than after good sleep in which the deep-sleep phases are reached. This is problematic because a decrease in cortisol can trigger or exacerbate inflammatory processes (its inactive form, cortisone, is familiar to many as a treatment for inflammatory diseases). In this context one also speaks of "inflammaging": when people age, their body's defenses age too - the acquired immune system directed against pathogens encountered over the course of life gradually shuts down, while the innate, non-specific immune system becomes overactive. This is mainly due to the macrophages, which release inflammatory messengers uncontrollably when there is a cortisol deficiency. The consequences are chronic inflammations such as atherosclerosis or arthritis.
- Movement/muscle strength From the age of 60, muscle mass decreases and muscle fibers are increasingly replaced by fat and connective tissue. There are three main reasons for this: - The muscle-building hormones (especially the growth hormone STH) decrease drastically. - The proteins that are important for building muscle are no longer absorbed as well by the intestines. - The nerves that activate muscle fibers (motor neurons) die off. This leads to age-related muscle wasting and frailty - clear signs of secondary aging. Part of a holistic longevity approach must therefore be to preserve muscle mass as much as possible in old age. Strength training and a good night's sleep (see above) are essential, because both stimulate STH release. Endurance training is also relevant for activating and training the mitochondria: in short bursts of exercise, energy is obtained directly from short-chain carbohydrates (sugar), so they do not train the mitochondria. Essential amino acids such as leucine and the combination of vitamins D3 & K2 are also important for muscle and bone health.
- Reactivation of the thymus in old age The thymus is a tiny organ in which our T cells mature. T cells recognize antigens and virus-infected body cells and kill them. From about the age of 60, however, the thymus largely stops functioning, so the immune system weakens with age. Until recently, scientists believed that the thymus could not be regenerated. This now seems to be changing: in the so-called TRIIM study (Thymus Regeneration, Immune Restoration and Insulin Mitigation) by Dr. Greg Fahy, the subjects took a mix of zinc (about 50 mg), vitamin D (50-70 mcg/ml), metformin (actually a diabetes drug that inhibits the formation of glucose in the liver, causing blood sugar levels to drop; it slows down the process by which the mitochondria extract energy from nutrients) and the sex hormone precursor DHEA. The result: the thymus regenerated and biological age decreased by an average of 2.5 years! Since, owing to the high costs, only 9 subjects took part - all of them men - a new study with 85 subjects (TRIIM-X) has now been launched; the results are expected by the end of 2022. If the results of the first study are even approximately confirmed, it would be an absolute sensation and a milestone in longevity research.
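Because "biological age" comes up repeatedly in this article (telomere length, methylation patterns, the TRIIM result), here is a deliberately simplified sketch of the principle behind an epigenetic clock such as Horvath's: a weighted sum of methylation values at selected CpG sites plus an intercept. The site names, weights and intercept below are invented purely for illustration; real clocks use hundreds of CpG sites with coefficients fitted to large training cohorts, and some apply additional transforms:

```python
# Illustrative only: a toy "epigenetic clock" as a weighted sum of methylation
# beta-values (0 = unmethylated, 1 = fully methylated) at a few CpG sites.
# Site names, weights and intercept are made up; real clocks are trained on
# thousands of samples and use hundreds of CpG sites.

toy_weights = {"cg_site_A": 35.0, "cg_site_B": -20.0, "cg_site_C": 12.5}
toy_intercept = 40.0

def estimate_biological_age(methylation: dict) -> float:
    """Linear combination of methylation values -> age estimate in years."""
    return toy_intercept + sum(
        weight * methylation[site] for site, weight in toy_weights.items()
    )

# Example: a hypothetical methylation profile from a blood or saliva sample
sample = {"cg_site_A": 0.62, "cg_site_B": 0.41, "cg_site_C": 0.55}
print(f"Estimated biological age: {estimate_biological_age(sample):.1f} years")
```

The only point of this sketch is that "biological age" from a methylation test is the output of a statistical model over methylation measurements, which is why lifestyle interventions that shift methylation can move the estimate in either direction.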
https://qidosha.com/en/blogs/qidosha-academy/longevity-fur-eine-gesunde-langlebigkeit
How do other neurologic drugs work? Neurologic drugs are medications used to treat different types of neurological disorders. Medications that do not fall into any specific class of neurologic drugs are categorized as other neurologic drugs. Other neurologic drugs include the following medications, which work in different ways: - Edaravone: Edaravone is a medication used to treat amyotrophic lateral sclerosis (ALS), a neurodegenerative disorder that causes progressive muscle wasting. Edaravone is believed to work by reducing oxidative stress, a part of the process that destroys nerve cells (neurons) in patients with ALS. - Oxidative stress is an imbalance between free radicals (reactive oxygen species/ROS) and the antioxidants that neutralize them. Free radicals play an essential role in biological processes, but excessive ROS cause cellular and DNA damage. Edaravone acts as an antioxidant and delays disease progression by scavenging free radicals. - Elamipretide: Elamipretide is a medication being developed to treat mitochondrial diseases and is awaiting FDA approval. Mitochondria are small organelles within cells that generate the energy all cells need to survive. Neuromuscular disorders are typical features of mitochondrial diseases because neurons and muscle cells have high energy needs. - Elamipretide binds to and enhances the activity of cardiolipin, an essential lipid (fat) component of the inner mitochondrial membrane. Elamipretide improves energy generation and reduces the production of free radicals and oxidative stress, increasing the availability of adenosine triphosphate (ATP), the energy molecule of cells. - Nimodipine: Nimodipine is used in patients who have had a brain aneurysm rupture to reduce the incidence and severity of neurological deficits caused by reduced blood flow (ischemia) in the brain due to the hemorrhage. Nimodipine prevents cerebral vasospasm and dilates the cerebral arteries, improving blood flow. - Nimodipine is a calcium channel blocker that inhibits the inflow of calcium ions into the smooth muscle cells around the blood vessels, making them relax. Nimodipine crosses the blood-brain barrier easily and has primary effects on the cerebral arteries and minimal effects on the heart. - Onasemnogene abeparvovec: Onasemnogene abeparvovec is used to treat spinal muscular atrophy (SMA) type-1, a neuromuscular disease caused by the absence of or defects in the survival motor neuron 1 (SMN1) gene. The SMN1 gene encodes SMN protein, essential for maintaining the health and normal functioning of motor neurons. - Onasemnogene abeparvovec is a one-time gene replacement therapy administered by intravenous infusion. A non-infectious virus delivers a fully functional SMN1 gene into the motor neurons, which helps increase the production of SMN protein. What are the uses of other neurologic drugs? Other neurologic drugs may be administered as oral capsules or solutions, or as intravenous (IV) infusions.
Other neurologic drugs may be used in the treatment of conditions that include: - Edaravone: - Amyotrophic lateral sclerosis (ALS) - Elamipretide: (pending FDA approval) - Barth syndrome, a rare genetic disorder that primarily affects males and causes heart muscle weakness, low white blood cell count, underdeveloped skeletal muscles, and muscle weakness - Primary mitochondrial myopathy, a muscle disease caused by defects in the mitochondria, the structures within cells responsible for generating energy - Leber's hereditary optic neuropathy, an inherited form of vision loss (orphan designation) - Nimodipine: - Subarachnoid hemorrhage (bleeding in the space between the arachnoid and pia mater, two of the brain's membranes), to improve neurological outcome by reducing the incidence and severity of ischemic deficits caused by the hemorrhage - Onasemnogene abeparvovec: - Spinal muscular atrophy type-1, a hereditary genetic disease that affects the central and peripheral nervous systems and voluntary muscle function. What are side effects of other neurologic drugs? Side effects of other neurologic drugs vary with each drug. A few of the most common side effects may include: - Edaravone: - Contusion - Gait disturbance - Headache - Skin and subcutaneous disorders including dermatitis and eczema - Respiratory failure, respiratory disorder, and hypoxia (low oxygen concentration in tissues) - Glycosuria (excessive sugar excretion in urine) - Tinea (fungal) infection - Hypersensitivity reactions and anaphylaxis (severe allergic reaction) - Elamipretide: - Headache - Dizziness - Abdominal pain - Flatulence - Mild redness or itching at the injection site - Nimodipine: - Reduction in systemic blood pressure - Diarrhea - Headache - Abdominal discomfort - Rash - Heart failure - Abnormal ECG and arrhythmia (irregular heartbeat) - Onasemnogene abeparvovec: - Elevated aminotransferases (liver enzymes) - Vomiting - Thrombotic microangiopathy (microscopic clots in small arteries and capillaries) - Acute liver injury and failure - Pyrexia (fever) - Increase in levels of troponin (a protein found in heart muscle) Information contained herein is not intended to cover all possible side effects, precautions, warnings, drug interactions, allergic reactions, or adverse effects. Check with your doctor or pharmacist to make sure these products do not cause any harm when you take them along with other medicines. Never stop taking your medication and never change your dose or frequency without consulting your doctor. What are names of some of the other neurologic drugs? Generic and brand names of some of the other neurologic drugs include:
https://www.rxlist.com/how_do_other_neurologic_drugs_work/drug-class.htm
Our germs have evolved to survive on our unique biology. Even viruses and bacterial infections that infect one species on our planet only rarely spread to another; dogs don't routinely get the flu, for example. Any alien life form that invaded Earth would likely be immune to earthly diseases, so don't expect a War of the Worlds solution. Much as humans have thumbs and can therefore use objects to their advantage, aliens would presumably also need this ability: one simply cannot expect to build and use tools without being able to grasp them. Giant blobs or creatures with long, unwieldy tentacles are therefore highly unlikely. How could an advanced species construct a spaceship and pilot such a vessel across the vast expanses of space without the ability to hold on to, and move, an object with precision? It is well within reason to expect alien life forms to have an even better developed set of appendages than the finger-and-thumb system we use on Earth. Much like smartphones and advanced machinery on Earth, alien species would probably adapt their technology to their bodies as much as their bodies to their technology, leaving it difficult, if not impossible, for humans to operate alien devices.
http://listverse.com/2013/11/06/10-traits-aliens-must-have-according-to-science/?utm_source=more&utm_medium=link&utm_campaign=direct
Humans have evolved over millions of years to live on Earth. Now humans are planning long-duration space missions that will require them to live in space for extended periods of time. NASA's Journey to Mars, which would be the longest crewed space mission ever attempted, will require humans to live in space for over three years. Since long-duration space travel requires humans to live in environments we have not evolved for, scientists are developing ways to keep them healthy in space.
https://blog.shepherdresearchlab.org/how-ai-can-help-astronauts-stay-healthy-on-long-duration-space-missions/
The molecular mechanisms have been shown to involve DNA repair enzymes, but the exact nature of these processes is still under investigation. The relative differences between LDMH interactions with human and rodent cells are presented to help in understanding possible roles of LDMH in clinical application. The positive aspects of LDMH-brachytherapy for clinical application are sixfold: (1) the thermal goals (temperature, time and volume) are achievable with currently available technology, (2) the hyperthermia by itself has no detectable toxic effects, (3) thermotolerance appears to play a minor role, if any, in radiation sensitization, (4) a TER of around 2 can be expected, (5) the hypoxic fraction may be decreased due to blood flow modification, and (6) simultaneous chemotherapy may also be sensitized. Combined LDMH and brachytherapy is a cancer therapy that has an established biological rationale and sufficient technical and clinical advancement to be appropriately applied. This modality is ripe for clinical testing. Microcontroller uses in Long-Duration Ballooning. This paper discusses how microcontrollers are being utilized to fulfill the demands of long duration ballooning (LDB) and the advantages of doing so. The Columbia Scientific Balloon Facility (CSBF) offers the service of launching high altitude balloons, which provide an over-the-horizon telemetry system and a platform for scientific research payloads to collect data. CSBF has utilized microcontrollers to address multiple tasks and functions which were previously performed by more complex systems. A microcontroller system has recently been developed and programmed in house to replace our previous backup navigation system, which is used on all LDB flights. A similar microcontroller system was developed to be independently launched in Antarctica before the actual scientific payload. This system's function is to transmit its GPS position and a small housekeeping packet so that we can confirm the upper-level float winds are as predicted from satellite-derived models. Microcontrollers have also been used to create test equipment to functionally check out the flight hardware used in our telemetry systems. One test system can be used to quickly determine whether the communication link we provide for the science payloads is functioning properly. Another system was developed to let us easily determine the status of one of our over-the-horizon communication links through a closed-loop system. This test system has given us the capability to provide more field support to science groups than we were able to in years past. The trend of utilizing microcontrollers has taken place for a number of reasons. Using microcontrollers to fill these needs has given us the ability to quickly design and implement systems which meet flight-critical needs, as well as perform many of the everyday tasks in LDB. This route has also allowed us to reduce the amount of time required for personnel to perform a number of the required tasks. Long duration performance of high temperature irradiation resistant thermocouples. Many advanced nuclear reactor designs require new fuel, cladding, and structural materials. Data are needed to characterize the performance of these new materials in high-temperature, radiation conditions. However, traditional methods for measuring temperature in-pile degrade at high temperatures.
To address this instrumentation need, the Idaho National Laboratory (INL) developed and evaluated the performance of a high temperature irradiation-resistant thermocouple that contains alloys of molybdenum and niobium. To verify the performance of INL's recommended thermocouple design, a series of high-temperature, long-duration tests (up to six months) has been initiated. This paper summarizes results from the tests that have been completed. In addition, post-test metallographic examinations are discussed which confirm the compatibility of the thermocouple materials throughout these long-duration, high-temperature tests. There are papers on the technology needed for LDSMs. Few are looking at how ground-based pre-mission training and on-board in-transit training must be melded into one training concept that leverages this technology. This certification must ensure, before the crew launches, that they can handle any problem using on-board assets without a large ground support team. Personal growth following long-duration spaceflight. As long-duration missions are and will remain the norm, it is important for the space agencies and the voyagers themselves to develop a better understanding and possible enhancement of this phenomenon. TFTR neutral-beam test facility. A test installation for one source was set up using prototype equipment to discover and correct possible deficiencies, and to properly coordinate the equipment. This test facility represents the first opportunity for assembling an integrated system of hardware supplied by diverse vendors, each of whom designed and built his equipment to performance specifications. For the installation and coordination of the different portions of the total system, particular attention was given to personnel safety and safe equipment operation. This paper discusses various system components, their characteristics, interconnection and control. Results of the recently initiated test phase will be reported at a later date. A century after Viktor Hess' discovery of cosmic rays, balloon flights still play a central role in the investigation of cosmic rays over nearly their entire spectrum. We report on the current status of the NASA balloon program for particle astrophysics, with particular emphasis on the very successful Antarctic long-duration balloon program and new developments in the progress toward ultra-long duration balloons. On the eve of human space travel, Strughold first proposed a simple classification of the present and future stages of manned flight that identified key factors, risks and developmental stages for the evolutionary journey ahead. As we look to optimize the potential of the ISS as a gateway to new destinations, we need a current shared working definitional model of long duration human space flight to help guide our path. An initial search of formal and grey literature was augmented by liaison with subject matter experts. The search strategy focused both on the use of the terms "long duration mission" and "long duration spaceflight" and on broader related current and historical definitions and classification models of spaceflight. The related sea and air travel literature was also subsequently explored with a view to identifying analogous models or classification systems. There are multiple different definitions and classification systems for spaceflight, including phase and type of mission, craft and payload, and related risk management models.
However, the frequently used concepts of "long duration mission" and "long duration spaceflight" are infrequently operationally defined by authors, and no commonly referenced classical or gold-standard definition or model of these terms emerged from the search. The categorization (Cat) system for sailing was found to be of potential analogous utility, with its focus on understanding the need for crew and craft autonomy at various levels of potential adversity and inability to gain outside support or return to a safe location, due to factors of time, distance and location. This slide presentation reviews the development and use of a tool for assessing spaceflight cognitive ability in astronauts. This tool, WinSCAT, is medically required for all long-duration missions and contains a battery of five cognitive assessment subtests that are scheduled monthly and compared against the individual preflight baseline. Its purpose is to provide ISS crew surgeons with an objective clinical tool after an unexpected traumatic event, a medical condition, or the cumulative effects of space flight that could negatively affect an astronaut's cognitive status and threaten mission success. WinSCAT was recently updated to add network capability to support a 6-person crew on the station support computers. ISS performance data were assessed to compare initial to modified interpretation rules for detecting potential changes in cognitive functioning during space flight. Applying the newly derived rules to ISS data results in a number of off-nominal performances at various times during and after flight. Correlation to actual events is needed, but possible explanations for off-nominal performances could include actual physical factors such as toxic exposure, medication effects, or fatigue; emotional factors including stress from the mission or life events; or failure to exert adequate effort on the tests. Game-based evaluation of personalized support for astronauts in long duration missions. Long duration missions set high requirements for personalized astronaut support that takes into account the social, cognitive and affective state of the astronaut. Such support should be tested as thoroughly as possible before deployment into space. Activity enhances dopaminergic long-duration response in Parkinson disease. Objective: We tested the hypothesis that a dopamine-dependent motor learning mechanism underlies the long-duration response to levodopa in Parkinson disease (PD), based on our studies in a mouse model. By data-mining the motor task performance in dominant and nondominant hands of the subjects in a double-blind randomized trial of levodopa therapy, the effects of activity and dopamine therapy were examined. Results: The mean change in finger-tapping counts from baseline before the initiation of therapy to predose at 9 weeks and 40 weeks increased more in the dominant compared to the nondominant hand in levodopa-treated subjects, in a dose-dependent fashion. There was no significant difference between dominant and nondominant hands in the placebo group. The short-duration response, assessed by the difference of postdose performance compared to predose performance at the same visit, did not show any significant difference between dominant and nondominant hands. Such an effect was confined to dopamine-responsive symptoms and not seen in dopamine-resistant symptoms such as gait and balance.
We propose that long-lasting motor learning facilitated by activity and dopamine is a form of disease modification that is often seen in trials of medications that have symptomatic effects. A new wastewater recovery system has been developed that combines novel biological and physicochemical components for recycling wastewater on long duration human space missions. At its center are two unique game-changing technologies: (1) a biological water processor (BWP) to mineralize organic forms of carbon and nitrogen, and (2) an advanced membrane processor (Forward Osmosis Secondary Treatment) for removal of solids and inorganic ions. The AWP is designed for recycling larger quantities of wastewater from multiple sources expected during future exploration missions, including urine, hygiene hand wash, shower, oral and shave, and laundry. The BWP utilizes a single-stage membrane-aerated biological reactor for simultaneous nitrification and denitrification. The BWP has been operated continuously for an extended period. If the wastewater is slightly acidified, ammonia rejection is optimal. This paper will provide a description of the technology and summarize results from ground-based testing using real wastewater. ORNL keV neutral beam test facility. The test facility can simulate a complete beam line injection system and can provide a wide range of experimental operating conditions. Herein is offered a general description of the facility's capabilities and a discussion of present system performance. Advancing technology, coupled with the desire to explore space, has resulted in increasingly longer manned space missions. Although long duration space flights (LDSF) have provided a considerable amount of scientific research on human ability to function in extreme environments, findings indicate long duration missions take a toll on the individual, both physiologically and psychologically. These physiological and psychological issues manifest themselves in performance decrements, which could lead to serious errors endangering the mission, spacecraft and crew. The purpose of this paper is to document existing knowledge of the effects of LDSF on performance, habitability, and workload, and to identify and assess potential tools designed to address these decrements, as well as propose an implementation plan to address the habitability, performance and workload issues. Tokamak Fusion Test Reactor neutral beam injection system vacuum chamber. The chamber will have a number of unorthodox features to accommodate both neutral beam and TFTR requirements. The design constraints, and the resulting chamber design, are presented. Production and utilization of high level and long duration shocks.
https://deckvingpasys.cf/
From the time of our birth, humans have felt a primordial urge to explore -- to blaze new trails, map new lands, and answer profound questions about ourselves and our universe. Human exploration of a near-Earth asteroid. (Image credit: John Frassanito & Associates) The Exploration Technology Development Program (ETDP) develops long-range technologies to enable human exploration beyond Earth orbit. ETDP also integrates and tests advanced exploration systems to reduce risks and improve the affordability of future missions. Projects The projects in the Exploration Technology Development Program were formulated to address the high priority technology needs for human spaceflight. All technology projects are managed at NASA Centers.
- Advanced In-Space Propulsion: This project develops concepts, technologies, and test methods for high-power electric propulsion and nuclear thermal propulsion systems to enable low-cost and rapid transport of cargo and crew beyond low Earth orbit.
- Autonomous Systems and Avionics: This project develops and demonstrates integrated autonomous systems capable of managing complex operations in space to reduce crew workload and dependence on support from Earth. Technologies will address operations in extreme environments, efficient ground-based and on-board avionics systems and operations, and cost-effective human-rated software development.
- Cryogenic Propellant Storage and Transfer: This project develops technologies to enable long-duration storage and in-space transfer of cryogenic propellants. Technology development includes active cooling of propellant tanks, advanced thermal insulation, measurement of propellant mass, liquid acquisition devices, and automated fluid couplings for propellant transfer between vehicles.
- Entry, Descent, and Landing (EDL) Technology: This project develops advanced thermal protection system materials, aerothermodynamics modeling and analysis tools, and concepts for aerocapture and atmospheric entry systems for landing large payloads safely and precisely on extra-terrestrial surfaces and returning to Earth.
- Extravehicular Activity Technology: This project develops component technologies for advanced space suits to enable humans to conduct "hands-on" surface exploration and in-space operations outside habitats and vehicles. Technology development includes portable life support systems, thermal control, power systems, communications, avionics, and information systems, and space suit materials.
- High-Efficiency Space Power Systems: This project develops technologies to provide low-cost, abundant power for deep-space missions, including advanced batteries and regenerative fuel cells for energy storage, power management and distribution, solar power generation, and nuclear power systems. A major focus will be on the demonstration of dual-use technologies for clean and renewable energy for terrestrial applications.
- Human Robotic Systems: This project develops advanced robotics technology to amplify human productivity and reduce mission risk by improving the effectiveness of human-robot teams. Key technologies include teleoperation, human-robot interaction, robotic assistance, and surface mobility systems for low-gravity environments. Early demonstrations will focus on human teams interacting with multiple robotic systems.
Longer-term demonstrations will focus on enabling operations in remote, hostile environments with limited support from Earth.
- In-Situ Resource Utilization: This project will enable sustainable human exploration by using local resources. Research activities are aimed at using lunar, asteroid, and Martian materials to produce oxygen and extract water from ice reservoirs. A flight experiment to demonstrate lunar resource prospecting, characterization, and extraction will be considered for testing on a future robotic precursor exploration mission. Concepts to produce fuel, oxygen, and water from the Martian atmosphere and from subsurface ice will also be explored.
- Life Support and Habitation Systems: This project develops technologies for highly reliable, closed-loop life support systems, radiation protection technology, environmental monitoring and control technologies, and technologies for fire safety to enable humans to live for long periods in deep-space environments.
- Lightweight Spacecraft Materials and Structures: This project develops advanced materials and structures technology to enable lightweight systems to reduce mission cost. Technology development activities focus on structural concepts and manufacturing processes for large composite structures and cryogenic propellant tanks for heavy lift launch vehicles, and on fabric materials and structural concepts for inflatable habitats.
Advanced Exploration Systems Projects Advanced exploration systems incorporate new technologies to enable future capabilities for deep space exploration. Prototype systems are demonstrated in ground tests and flight experiments.
- Multi-Mission Space Exploration Vehicle: This project is developing a prototype crew excursion vehicle to enable exploration of Near Earth Asteroids and planetary surfaces.
- Deep Space Habitat: This project is developing concepts and prototype subsystems for a habitat that will allow the crew to live and work safely in deep space.
- Autonomous Precision Landing Systems: This project is developing optical sensors, and navigation and control algorithms to enable the capability for autonomous precision landing on the Moon or Mars. The autonomous precision landing system will be demonstrated in flight tests of a small lander.
- Analogs: This project is demonstrating prototype systems and operational concepts for exploration of Near Earth Asteroids and Mars in simulations, desert field tests, underwater environments, and ISS flight experiments.
https://www.nasa.gov/exploration/technology/index.html
NASA-Johnson Space Center is designing and building a habitat (Bioregenerative Planetary Life Support Systems Test Complex, BIO-Plex) intended for evaluating advanced life support systems developed for long-duration missions to the Moon or Mars where all consumables will be recycled and reused. A food system based on raw products obtained from higher plants (such as soybeans, rice, and wheat) may be a central feature of a biologically based Advanced Life Support System. To convert raw crops to edible ingredients or food items, multipurpose processing equipment such as an extruder is ideal. Volatile compounds evolved during the manufacturing of these food products may accumulate and reach toxic levels. Additionally, off-odors often dissipated in open-air environments without consequence may cause significant discomfort in the BIO-Plex. Rice and defatted soy flours were adjusted to 16% moisture, and triplicate samples were extruded using a tabletop single-screw extruder. The extrudate was collected in specially designed Tedlar bags from which air samples could be extracted. The samples were analyzed by GC-MS with special emphasis on compounds with Spacecraft Maximum Allowable Concentrations (SMACs). Results showed a combination of alcohols, aldehydes, ketones, and carbonyl compounds in the different flours. Each compound and its SMAC value, as well as its impact on the air revitalization system, was discussed. Vodovotz, Y. (NASA Johnson Space Center, Houston, TX), Zasypkin, D., Lertsiriyothin, W., Lee, T. C., and Bourland, C. T. (2000). Biotechnology Progress, 16(2). doi:10.1021/bp990149i. NASA Document ID 20040141612; document type: reprint (version printed in journal); date acquired: August 22, 2013.
https://ntrs.nasa.gov/search.jsp?R=20040141612&hterms=rice+flour&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Drice%2Bflour
Moon to Mars: What’s Beyond? Moon to Mars and Beyond Report (Image credit: moontomars.org) On January 14, 2004, President George W. Bush announced a new vision for America’s civil space program that calls for human and robotic missions to the Moon, Mars, and beyond: "Today, humanity has the potential to seek answers to the most fundamental questions posed about the existence of life beyond Earth. Telescopes have found planets around other stars. Robotic probes have identified potential resources on the Moon, and evidence of water – a key ingredient for life – has been found on Mars and the moons of Jupiter." - see full report of President’s Commission online This vision set forth goals of: returning the Space Shuttle safely to flight; completing the International Space Station (ISS); phasing out the Space Shuttle when the ISS is complete (about 2010); sending a robotic orbiter and lander to the Moon; sending a human expedition to the Moon as early as 2015, but no later than 2020; conducting robotic missions to Mars in preparation for a future human expedition; and conducting robotic exploration across the solar system. Ray Bradbury, celebrated author of The Martian Chronicles, testified to the Commission about the importance of exploration. When presented with this challenge of travel to Mars, he said, "Our children will point to the sky and say YES!" Spaceflight is difficult, hazardous, and confronted by enormous distances, at least in human terms. Despite extensive safety precautions, during its 144 human space missions the United States has lost 17 astronauts. The pursuit of discovery is a risky business, and it will continue to be so for the foreseeable future. Perhaps of greatest relevance are the resources required by humans to live and work in space. For example, the common H2O (water) molecule can yield oxygen to breathe, water to drink, and oxygen and hydrogen as propellants. Fortunately, these potential resources exist in some form in abundance at the first two human destinations, the Moon and Mars. Currently, there are many unknowns about the extraction of useful materials and the operations needed to support such activity. These issues will require expertise from both the aerospace and mining industries. Enabling Technologies There was significant agreement that helped the Commission identify 17 areas for initial focus. Surely others will emerge over time. At this juncture, we identify the following enabling technologies, which are not yet prioritized: - Affordable heavy lift capability – technologies to allow robust affordable access of cargo, particularly to low-Earth orbit. - Advanced structures – extremely lightweight, multi-function structures with modular interfaces, the building-block technology for advanced spacecraft. - High acceleration, high life cycle, reusable in-space main engine – for the crew exploration vehicle. - Advanced power and propulsion – primarily nuclear thermal and nuclear electric, to enable spacecraft and instrument operation and communications, particularly in the outer solar system, where sunlight can no longer be exploited by solar panels. - Cryogenic fluid management – cooling technologies for precision astronomical sensors and advanced spacecraft, as well as propellant storage and transfer in space. - Large aperture systems – for next-generation astronomical telescopes and detectors.
- Formation flying – for free-space interferometric applications and near-surface reconnaissance of planetary bodies. - High bandwidth communications – optical and high-frequency microwave systems to enhance data transmission rates. - Entry, descent, and landing – precision targeting and landing on "high-g" and "low-g" planetary bodies. - Closed-loop life support and habitability – Recycling of oxygen, carbon dioxide, and water for long-duration human presence in space. - Extravehicular activity systems – the spacesuit of the future, specifically for productive work on planetary surfaces. - Autonomous systems and robotics – to monitor, maintain, and where possible, repair complex space systems. - Scientific data collection/analysis – lightweight, temperature-tolerant, radiation-hard sensors. - Biomedical risk mitigation – space medicine; remote monitoring, diagnosis and treatment. - Transformational spaceport and range technologies – launch site infrastructure and range capabilities for the crew exploration vehicle and advanced heavy lift vehicles. - Automated rendezvous and docking – for human exploration and robotic sample return missions. - Planetary in situ resource utilization – ultimately enabling us to "cut the cord" with Earth for space logistics. A science research agenda can be organized around the following broad themes: - Origins – the beginnings of the universe, our solar system, other planetary systems, and life. - Evolution – how the components of the universe have changed with time, including the physical, chemical, and biological processes that have affected it, and the sequences of major events. - Fate – what the lessons of galactic, stellar, and planetary history tell about the future and our place in the universe. A Notional Science Research Agenda Origins - The Big Bang, the structure and composition of the universe including the formation of galaxies and the origin of dark matter and dark energy. - Nebular composition and evolution – gravitational collapse and stellar ignition. - Formation of our solar system and other planetary systems; clues to the origin of the solar system found in meteorites, cosmic dust, asteroids, comets, Kuiper Belt Objects, and samples of planetary surfaces. - Pre-biotic solar system organic chemistry – locations, histories, and processes; emergence of life on Earth; interplay between geological and astronomical processes. Evolution - The Universe – processes that influence and produce large-scale structure, from sub-nuclear to galactic scales. - Stellar Evolution – nucleosynthesis and evolutionary sequences, including the influence of particles and fields on the space environment. - Planetary Evolution – the roles of impact, volcanism, tectonics, and orbital or rotational dynamics in shaping planetary surfaces; structure of planetary interiors. - Comparative Planetology – study of Earth as a terrestrial planet; divergence of evolutionary paths of Earth, Venus, and Mars; comparisons of giant planets and extrasolar planets. - Atmospheres – early evolution and interaction with hydrospheres; longterm changes and stability. - Search for Habitable Environments – identification and characterization of environments potentially suitable for the past existence and present sustenance of biogenic activity.
Fate - Biology of species in space – micro- and fractional gravity, long-term effects of exposure to variable gravity; radiation; avoidance and mitigation strategies. - Impact Threat – cataloguing and classification of near-Earth objects; estimation of the recent impact flux and its variations; flux variation with position in solar system; hazard avoidance and mitigation. - Natural hazard assessment – Advanced space-based characterization of meteorological, oceanic, and solid Earth natural hazards to diminish consequences and advance toward predictive capability. - Temporal variations in solar output – monitoring and interpretation of space weather as relevant to consequence and predictability. - Climate change – assessment of recent climatic variations; solar controls on climate change; quantitative modeling and testing of the greenhouse effect; and possible effects on planets and life. - Long-term variations of solar system environment – galactic rotation and secular variations; local supernovae.
https://www.astrobio.net/retrospections/moon-to-mars-whats-beyond/
Concrete Construction on the Moon Building bases on the Moon, according to Space Exploration Initiative, may start in the early 21st century. Concrete, as versatile as it has been, might become the prime material for construction.... A New Era in Space Operations The United States has embarked on a bold new course in space. We are in the process of deploying global missile defenses which promises to help realize an extended era of international... SEI In-Space Operations and Support Challenges The initial architectures and mission concepts for Space Exploration Initiative (SEI) are being studied to expand the manned space exploration capability. To support assessment of operational... Mitigation of Dust Contamination During EVA Operations on the Moon and Mars The Space Exploration initiative is charting a new course for human exploration of the Moon and Mars. Advances in EVA system have been identified as being critical for enabling long-duration... Advanced Construction Management for Lunar Base Construction—Surface Operation Planner The unprecedented task of constructing a lunar base could easily overwhelm any existing terrestrial construction management system. Couple this with the overall need for lunar surface... Operations Analysis for a Large Lunar Telescope With the launch of the Hubble Space Telescope a new generation of science optical observatories was made a reality. Astronomers and scientists have long seen space as having inherit advantages... Mars Via the Moon—A Robust Lunar Resources-Based Architecture A space operations architecture is outlined that uses lunar materials to cost-effectively support Lunar and Mars exploration and operations. This architecture uses a chemically powered... Characterization of Emplacement Strategies for Lunar and Mars Missions In planning missions to the Moon and Mars, a significant activity will be the delivery of the crew and necessary equipment to the planet surface to enable the mission objectives. Because... Assessment of a SSF Servicing Facility Evolution options for a servicing facility at Space Station Freedom (SSF) are currently being studied. While many choices exist, one promising idea is an enclosed facility or hangar which... Pressure Suit Requirements for Moon and Mars EVA's In this paper, we examine the influence of pressure suit and backpack designs on astronaut productivity and on the frequency with which EVA's can be conducted during lunar base operations... Utilization of On-Site Resources for Regenerative Life Support Systems at a Lunar Outpost Regenerative Life Support Systems (RLSS) will be required to regenerate air, water, and wastes, and to produce food for human consumption during long-duration stays on the Moon. It may... Medical Care on the Moon Eventually, people will return to the Moon to stay for prolonged periods of time. When they do, they will be exposed to a wide range of threats to their health including, decompression... Artificial Gravity Augmentation on the Moon and Mars Extended visits to the moon and Mars will require a base on the surface. Exploration of small planetary neighbors will depend upon the development of a life support system that prevents... System Concepts for a Series of Lunar Optical Telescopes The Lunar Telescope Working Group of the Marshall Space Flight Center (MSFC), NASA, has conducted conceptual studies of an evolutionary family of UV/optical/IR telescopes to be based on... 
Thermal Investigation of a Large Lunar Telescope Recent interest in construction of a large telescope on the Lunar surface (Nein and Davis, 1991; Bely, Burrows, and Illingworth, 1989) has prompted this feasibility study of a thermal... The Lunar Transit Telescope (LTT): An Early Lunar-Based Science and Engineering Mission The moon is an excellent spacecraft, with mass, rotational, thermal and orbital properties which make its utilization as a base for astronomical observations superbly reasonable. The Lunar... Lunar Transit Telescope Lander Design The lunar surface offers a unique platform for telescopes to observe distant planets. The absence of significant atmosphere provides clarity that is of orders of magnitude better when... Some Considerations for Instrumentation for a Lunar-Based Solar Observatory Outstanding problems in solar physics, observational trends and directions of instrumental development in solar astronomy are discussed briefly. These lead to the specification of observational... SALSA: A Lunar Submillimeter-Wavelength Array A conceptual design is described for a lunar submillimeter-wavelength interferometer called SALSA, a Synthesis Array for Lunar Submillimeter Astronomy. The intent is to describe SALSA... Very Low Frequency Radio Astronomy from Lunar Orbit This paper discusses the use of very low frequency aperture synthesis as a probe of astrophysical phenomena. Specifically, the science achievable with the Lunar Observer Radio Astronomy...
https://cedb.asce.org/CEDBsearch/records.jsp?terms=Moon&start=100
The creation of effective life support systems (LSSs) is one of the main tasks of medical and biological support for long-duration space flight. The design principles of such an LSS will be defined by a number of parameters, including the mass, overall size and energy limitations of the interplanetary spacecraft, the duration of the expedition and the crew size. It is clear that including biological subsystems in the LSS of a long-duration interplanetary flight will help form a full-fledged environment for humans in the spacecraft. It would be an appropriate solution for the long-term biological needs of the crew and would help eliminate possible negative consequences of their long stay in an artificial (abiogenic) environment. Experiments with higher plants conducted on board the Mir orbital complex and the Russian segment of the ISS showed that plants are capable of prolonged normal growth, full development and reproduction, without deviations, under real space flight conditions. These results allow us to assume that greenhouses are a strong candidate for the biological subsystem to be included in the LSS for interplanetary space flight. Including greenhouse equipment in a spacecraft will require a number of corrective actions in the functional schemes of the existing LSS; that is, it will lead to a redistribution of material streams inside the LSS and an increase in the functional load on the existing systems. Furthermore, integrating a greenhouse into the LSS of an interplanetary spacecraft requires a number of technical problems to be resolved. In the present review, we discuss the structural, technological and mass-transfer characteristics of a greenhouse as a component of the LSS for crews of long-duration interplanetary missions, in particular a Mars expedition.
http://www.spacestationresearch.com/research/biological-component-of-life-support-systems-for-a-crew-in-long-duration-space-expeditions/
This is a new book by two astronomers, Donald Goldsmith and Martin Rees, in which the past and future of the exploration of the solar system and beyond is discussed in some detail. As the title suggests, the continued suitability of astronauts for this purpose is a running question throughout the book. Chapters cover projects involving near-earth orbits, the moon, Mars, asteroids, space colonization, the costs of space exploration and space law. No firm conclusions are reached, and the two simply finish by saying that the exploration of Mars is about to become the focus of many groups. The greatest single obstacle to hands-on space exploration is gravity, because a lot of energy is required to leave the earth's atmosphere. Furthermore, manned space flights are far more expensive than unmanned flights, because the systems necessary to support human life are much heavier than the systems necessary to support robots. Although there have been situations in which a human presence on a space mission has been more efficient than a robotic presence, advances in AI are rapidly closing the gap. One of the disadvantages of human astronauts is their susceptibility to cancer caused by various forms of radiation, which consequently requires heavy shielding on manned flights. Other human requirements in space also increase weight in comparison to robots. Because of the problem of the earth's gravity, the exploration of other planets and asteroids would be less costly and easier if the missions originated on the moon or other objects with weak gravity. For this reason, there will probably be permanent bases on the moon relatively soon. The far side of the moon would also be an excellent location for telescopes. The authors don't draw distinctions between missions based on scientific advancement, popular enthusiasm, billionaire hubris, commercial interests or geopolitics, so there is no clear perspective defining which activities are appropriate – I found this a little disappointing. They are simply predicting what is likely to happen next. So, in a few years there will be bases on the moon, and in about forty years there may be bases on Mars, the moons of other planets or asteroids. Among the motivators are international competition, the mining of rare elements, curiosity about whether life exists elsewhere in the solar system and the potential development of space habitats for humans. The moon contains helium-3, which could be used to generate energy. The moon, other moons, Mars and some asteroids contain water, which could be useful if bases or colonies are developed. Besides the possibility of human colonies on Mars, some people envision colonies in large, rotating cylinders in space or on the moons of other planets. The authors dutifully mention the hostility of non-earth environments to humans. On the whole, I found the book informative about a topic that is likely to become far more important in the future. However, the focus on technical facts omits many of the significant problems associated with non-earth habitation by humans. If the authors had consulted biologists and sociologists, they might have provided a fuller picture of the hazards of space for humans. To me, they have overlooked the fact that, as earth-evolved organisms, humans are unlikely to feel at home anywhere other than on earth or an extremely close simulation of it. 
I think that living in a Martian colony would probably be like living in a small, remote motel somewhere in Nevada, without the possibility of opening a window or going outside unless protected by a special suit. The authors discuss the terraforming of Mars, i.e., the conversion of Mars to an earth-like habitat. Although that could conceivably occur in the distant future, there is no guarantee that people would be happier there than they are here. Moreover, if humans were to leave earth because it became too crowded, polluted, hot or violent, why would anyone expect that space colonies wouldn't also become too crowded, polluted, hot or violent? If the colonists were trying to escape poor governance on earth, why would they think that they would find better governance in a space colony? I think that, with all the expense and risk associated with human travel to and residence in space, an analysis of what it would take to make living on earth more desirable and sustainable ought to have been made. We have the ability to painlessly reduce the population here by limiting the number of births, and we have the technology to solve the problems of climate change. In particular, it would be far easier to terraform earth, returning it to an earlier state, than Mars or anywhere else, and in this respect the book is extremely shortsighted. In a similar vein, the authors are neutral on speciation. It is true that speciation occurs on its own, as species adapt to changes in their environments, but, speaking for myself, I am perfectly happy being a human. As far as I'm concerned, Elon Musk and his friends can all become cyborgs and move to Mars. Good riddance!
https://www.doubttheexperts.com/2022/05/the-end-of-astronauts.html
NASA has announced how it plans to spend its $49.9 million Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) program funds. In total, NASA received 1,621 proposals in response to its call for applications. It selected 399 research and technology proposals which it hopes will enable future missions into deep space and advancements in aviation science.
"The SBIR and STTR program's selection of nearly 400 proposals for further development is a testament to NASA's support of American innovation by small businesses and research institutions … This program provides opportunities for companies and institutions to commercialize their innovations while contributing to meeting NASA's goals and objectives across all mission areas." - Steve Jurczyk, associate administrator for the Space Technology Mission Directorate (STMD) at NASA Headquarters in Washington
The proposals support the development of a broad range of technologies in aeronautics, science, human exploration and operations, and space technology. But, in a blog post, NASA highlighted some of the key projects.
- High-temperature superconducting coils for a future fusion reaction space engine. These coils are needed for the magnetic field that allows the engine to operate safely. Nuclear fusion reactions are what power our sun and other stars, and an engine based on this technology would revolutionise space flight.
- Advanced drilling technologies. These would enable exploration of extraterrestrial oceans beneath the icy shells of the moons of Jupiter and Saturn, which can be miles thick. This is critical for detecting past or present life in these off-world oceans.
- New wheels for planetary rovers. These could dramatically improve mobility over a wide variety of terrains. This new design has multiple applications and could potentially impact any heavy-duty or off-road vehicle in diverse markets such as farming and defence.
- Software enabling collaborative control of multiple unmanned aircraft systems. This could revolutionise how unmanned vehicles fly in close proximity to manned flights. These types of operations also are of interest to national security and disaster relief missions, including fire management.
- A leading-edge manufacturing process that enables recycling of used or failed metal parts. This would work by placing them into a press, producing a slab of metal, and machining it into a needed metal part in logistically remote environments, such as a space station or long-duration space mission.
https://www.borntoengineer.com/nasa-fusion-reactor-engines-extraterrestrial-drilling
Great Lakes Dredge and Dock Co. LLC is a specialty dredging contractor based in Oak Brook, IL, with regional offices and projects throughout North America. Our experience enables us to adapt quickly to unique problems and resolve potentially costly situations for our clients. In addition, we have the proven ability to customize systems and management teams for individual projects. Employees are an integral part of ensuring our success, and as such, our staff is required to meet the highest expectations. We take pride in our renowned safety standards and require employees to utilize safe work practices at all times. We count on our employees to perform in fast-paced environments while still satisfying customer needs and maintaining effective working relationships. The ability to proactively identify issues and offer innovative solutions is also expected from our employees. We are committed to our employees and, in turn, expect our employees to contribute to the success of Great Lakes Dredge and Dock.
Job Summary: Under the general supervision of the Director of Health, Safety and Environmental, the Regional Safety Manager will report directly to the Senior Vice President of the Gulf Region and be responsible for monitoring, coordinating and managing the company safety programs, as well as directing personnel and managing safety resources at the divisional level. This position will be based out of the Houston, TX office and will support the Gulf Region. The ideal candidate must have strong communication skills, both verbal and written, as well as prior experience presenting information in a professional manner to external clients, craft field personnel, and all levels of the management team, including senior leadership. This person will have proven to be successful at building rapport and sustaining productive and open internal and external relationships.
Essential Duties & Responsibilities:
- Ensure regional operations are planned and conducted in a safe and compliant manner in accordance with regulations & policies
- Manage field safety personnel in support of regional operations (as assigned)
- Represent the Great Lakes Safety Department in a professional manner and be the onsite point of contact for employees, clients and regulators
- Monitor trends, conduct inspections and audits, and manage corrective actions in regional facilities and on project sites
- Coordinate, conduct and document safety trainings, meetings and incident review meetings
- Provide technical safety solutions, develop safety plans and submittals, and procure safety equipment or supplies
- Perform basic administrative duties using Microsoft Office Suite
- Other tasks as assigned
Requirements:
- 5+ years' experience performing the primary duties and responsibilities of this position
- 5+ years' experience in civil, marine or industrial construction; utility experience strongly preferred
- Incident management, regulatory action and inspection, and Jones Act experience strongly preferred
- Must be willing to travel 60% of the time.
- Must be collaborative in nature, with the ability to bring fresh ideas to the team and engage in constructive conflict with the team
- Must be able to identify with our personnel and have empathy for their challenges in order to provide solutions
Education & Certifications:
- CSP/CHST or similar credential, preferred
- Occupational Health and Safety certification or degree, or a similar degree in a relevant discipline
- OSHA 30 hour, required
- USACE EM385 training/certification, preferred
- Trainer certifications preferred: FA/CPR/AED, OSHA 500, Rigging, Confined Space, HAZWOPER
- Must maintain a valid driver's license with an acceptable driving record
Physical Demands / Working Environment:
- Ability to work outdoors in a variety of environments and conditions, and walk/stand in areas with irregular surfaces
- Ability to wear protective clothing and equipment including, but not limited to, fall protection, respirators, FRC's, etc.
- Ability to squat and climb
- Capable of travel via various types of aircraft, vehicles and vessels
Benefits:
- Competitive salary
- 401(k) program that includes 100% company matching of the first 6% of employee contributions with immediate vesting.
- Employee Stock Purchase Plan (ESPP).
- Annual profit-sharing contributions by the company to participants' 401(k) accounts based on the company's annual performance.
- Medical, Dental, Prescription, Life and Disability insurance plans.
Great Lakes Dredge & Dock Company is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of gender, sexual orientation, gender identity, race, color, ethnicity, national origin, religion, age, veteran status, disability status, genetic information or any other protected category.
https://www.gldd.com/job/regional-safety-manager-houston-tx/
Join Northrop Grumman on our continued mission to push the boundaries of possible across land, sea, air, space, and cyberspace. Enjoy a culture where your voice is valued and start contributing to our team of passionate professionals providing real-life solutions to our world's biggest challenges. We take pride in creating purposeful work and allowing our employees to grow and achieve their goals every day by Defining Possible. With our competitive pay and comprehensive benefits, we have the right opportunities to fit your life and launch your career today.
Northrop Grumman Aeronautics Systems is seeking an experienced Executive Assistant to the Vice President & General Manager Air Dominance (VPGM). This position is located in Palmdale, California. As a valued team member, the Executive Assistant is at the center of day-to-day operations for the division headquarters and will be welcomed into a team environment where contributions, growth and mutual support are the standard. The Executive Assistant supports the VPGM and coordinates engagements across the division leadership and executive support teams to support division objectives. The selected candidate must demonstrate organizational skills and self-motivation, and have an ownership mindset with the ability to accomplish high-impact duties in a time-sensitive, dynamic environment. This position may require occasional nonstandard work hours to support division priorities.
Duties and responsibilities will include but are not limited to:
- Performing complex administrative assignments that support the division's battle rhythm.
- Maintaining a welcoming environment and providing high-touch support to executives, functional and program leaders, and employees at all levels when representing the VPGM's office.
- Helping the division VPGM manage and prioritize the calendar of division meetings and participation in division and sector meetings or events, anticipating schedule challenges and proactively addressing conflicts to enable seamless integration of day-to-day operations.
- Completing travel arrangements and ensuring that all travel expenses are appropriately expensed.
- Juggling multiple complex assignments that require strong time management and project management skills, emotional intelligence, political savviness, discretion and integrity.
- Working with sensitive information necessitating tact, good judgement and problem-solving skills while maintaining the highest levels of confidentiality.
- Working in high-energy, collaborative, diverse, professional teams to complete a wide variety of tasks independently with minimal supervision.
- Demonstrating focus when operating in a fast-moving environment and managing information flow in a timely and accurate manner.
Basic Qualifications:
- High school diploma and a minimum of six years of additional education and/or experience in the administrative professional field, or a bachelor's degree with two years' experience in the administrative professional field.
- Computer skills required include advanced expertise in Microsoft Office software (Word, PowerPoint, Outlook, Excel, and Teams), SharePoint and intranet/internet proficiency.
- Must have experience with travel and expense reporting systems (Concur or similar, ITRIP)
- Ability to efficiently coordinate Outlook calendars and other routinely used scheduling tools.
- Prior experience coordinating both on- and off-site meetings and/or events.
Preferred Qualifications:
- Ability to swiftly adapt to new tools/technologies
- Experience participating in and hosting in-person and remote video teleconference meetings (e.g., Zoom, Skype, Teams)
- Must have the ability to independently compile and generate reports/presentations.
- Experience writing, proofreading and correcting documents.
- Expert-level oral and written communication skills.
- Ability to interface with executive-level contacts with considerable autonomy.
- Demonstrated ability to manage multiple administrative projects and initiatives; experience in supporting a variety of executive, management and administrative support levels within an organization.
The health and safety of our employees and their families is a top priority. The company encourages employees to remain up-to-date on their COVID-19 vaccinations. U.S. Northrop Grumman employees may be required, in the future, to be vaccinated or have an approved disability/medical or religious accommodation, pursuant to future court decisions and/or government action on the currently stayed federal contractor vaccine mandate under Executive Order 14042 https://www.saferfederalworkforce.gov/contractors/.
Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit http://www.northropgrumman.com/EEO. U.S. Citizenship is required for most positions.
What's great about Northrop Grumman
- Be part of a culture that thrives on intellectual curiosity, cognitive diversity and bringing your whole self to work.
- Use your skills to build and deliver innovative tech solutions that protect the world and shape a better future.
- Enjoy benefits like work-life balance, education assistance and paid time off.
Did you know? Northrop Grumman leads the industry team for NASA's James Webb Space Telescope, the largest, most complex and powerful space telescope ever built. Launched in December 2021, the telescope incorporates innovative design, advanced technology, and groundbreaking engineering, and will fundamentally alter our understanding of the universe.
https://www.northropgrumman.com/jobs/Palmdale-----California/Administrative-Services/R10081861/administrative-assistant-to-vp-2/
The Swift mission will determine the origin of GRBs, classify GRBs and search for new types, study the interaction of the ultrarelativistic outflows of GRBs with their surrounding medium, and use GRBs to study the early universe out to z > 10.
- HIGH-REDSHIFT GALAXIES IN THE HUBBLE DEEP FIELD: COLOUR SELECTION AND STAR FORMATION HISTORY TO Z ∼ 4 (P. Madau, H. Ferguson, M. Dickinson, M. Giavalisco, C. Steidel, A. Fruchter; Physics; 31 July 1996). The Lyman decrement associated with the cumulative effect of H I in QSO absorption systems along the line of sight provides a distinctive feature for identifying galaxies at z ≳ 2.5. Colour criteria,…
- THE HUBBLE SPACE TELESCOPE CLUSTER SUPERNOVA SURVEY. V. IMPROVING THE DARK-ENERGY CONSTRAINTS ABOVE z > 1 AND BUILDING AN EARLY-TYPE-HOSTED SUPERNOVA SAMPLE. We present Advanced Camera for Surveys, NICMOS, and Keck adaptive-optics-assisted photometry of 20 Type Ia supernovae (SNe Ia) from the Hubble Space Telescope (HST) Cluster Supernova Survey. The SNe…
- A PHOTOMETRIC REDSHIFT OF z ∼ 9.4 FOR GRB 090429B (A. Cucchiara, A. Levan, P. D'Avanzo; Physics; 25 May 2011). Gamma-ray bursts (GRBs) serve as powerful probes of the early universe, with their luminous afterglows revealing the locations and physical properties of star-forming galaxies at the highest…
- High-Redshift Supernova Rates (T. Dahlén, L. Strolger, J. Tonry; Physics; 24 June 2004). We use a sample of 42 supernovae detected with the Advanced Camera for Surveys on board the Hubble Space Telescope as part of the Great Observatories Origins Deep Survey to measure the rate of…
- Long γ-ray bursts and core-collapse supernovae have different environments (A. Fruchter, A. Levan, S. Woosley; Physics, Nature; 20 March 2006). When massive stars exhaust their fuel, they collapse and often produce the extraordinarily bright explosions known as core-collapse supernovae. On occasion, this stellar collapse also powers an even…
- An Extremely Luminous Panchromatic Outburst from the Nucleus of a Distant Galaxy. Multiwavelength observations of a unique γ-ray–selected transient detected by the Swift satellite, accompanied by bright emission across the electromagnetic spectrum, and whose properties are unlike any previously observed source are presented.
- A Possible Relativistic Jetted Outburst from a Massive Black Hole Fed by a Tidally Disrupted Star (J. Bloom, D. Giannios, A. J. van der Horst; Physics, Science; 16 April 2011). Observations suggest a sudden accretion event onto a central MBH of mass about 10^6 to 10^7 solar masses, which leads to a natural analogy of Sw 1644+57 to a temporary smaller-scale blazar.
- A millisecond pulsar in an eclipsing binary (A. Fruchter, D. Stinebring, J. Taylor; Physics, Nature; 1 May 1988). We have discovered a remarkable pulsar with period 1.6 ms, moving in a nearly circular 9.17-h orbit around a low-mass companion star. At an observing frequency of 430 MHz, the pulsar, PSR1957 + 20,…
- A γ-ray burst at a redshift of z ≈ 8.2. Long-duration γ-ray bursts (GRBs) are thought to result from the explosions of certain massive stars, and some are bright enough that they should be observable out to redshifts of z > 20 using…
https://www.semanticscholar.org/author/A.-Fruchter/6208068
Plants are critical in supporting life on Earth, and with help from an experiment that flew onboard space shuttle Discovery's STS-131 mission, they also could transform living in space. NASA's Kennedy Space Center partnered with the University of Florida, Miami University in Ohio and Samuel Roberts Noble Foundation to perform three different experiments in microgravity. The studies concentrated on the effects microgravity has on plant cell walls, root growth patterns and gene regulation within the plant Arabidopsis thaliana. Each of the studies has future applications on Earth and in space exploration. "Any research in plant biology helps NASA for future long-range space travel in that plants will be part of bioregenerative life support systems," said John Kiss, one of the researchers who participated in the BRIC-16 experiment onboard Discovery's STS-131 flight in April 2010 and a distinguished professor and chair of the Department of Botany at Miami University in Ohio. The use of plants to provide a reliable oxygen, food and water source could save the time and money it takes to resupply the International Space Station (ISS), and provide sustainable sources necessary to make long-duration missions a reality. However, before plants can be effectively utilized for space exploration missions, a better understanding of their biology under microgravity is essential. Kennedy partnered with the three groups for four months to provide a rapid turnaround experiment opportunity using the BRIC-16 in Discovery's middeck on STS-131. And while research takes time, the process was accelerated as the end of the Space Shuttle Program neared. Howard Levine, a program scientist for the ISS Ground Processing and Research Project Office and the science lead for BRIC-16, said he sees it as a new paradigm in how NASA works spaceflight experiments. The rapid turnaround is quite beneficial to both NASA and the researchers, saving time and money. Each of the three groups was quite impressed with the payload processing personnel at Kennedy. Kiss said the staff at the Space Life Sciences Lab at Kennedy did an outstanding job and that the experienced biologists and engineers were extremely helpful with such a quick turnaround. Kiss and his group published a paper on their initial findings of plant growth in microgravity in the October 2011 issue of the journal Astrobiology. They found that roots of space-grown seedlings exhibited a significant difference compared to the ground controls in overall growth patterns in that they skewed in one direction. Their hypothesis is that an endogenous response in plants causes the roots to skew and that this default growth response is largely masked by the normal gravity experienced on the Earth's surface. "The rapid turnaround was quite challenging, but it was a lot of fun," said Anna-Lisa Paul, research associate professor in the Department of Horticultural Sciences at the University of Florida. "The ability to conduct robust, replicated science in a time frame is comparable to the way we conduct research in our own laboratories, which is fundamentally a very powerful system." Paul's research and that of her colleague Robert Ferl, professor at the University of Florida and co-principal investigator on the BRIC-16 experiment, focused on comparing patterns of gene expression between Arabidopsis seedlings and undifferentiated Arabidopsis cells, which lack the normal organs that plants use to sense their environment - like roots and leaves. 
Paul and Ferl found that even undifferentiated cells "know" they are in a microgravity environment, and further, that they respond in a way that is unique compared to plant seedlings. Elison Blancaflor, associate professor at the Samuel Roberts Noble Foundation, discovered that plant genes encoding cell-wall structural proteins were significantly affected by microgravity. "This is exciting because this research has given us the tools to begin working on designing plants that perform better on Earth and in space," Blancaflor said. Blancaflor has now extended his findings from BRIC-16 to generate new hypotheses to explain basic plant-cell function. For example, the BRIC-16 results led the Noble Foundation team to identify novel components of the molecular machinery that allow plant cells to grow normally. According to Levine, plants could contribute to bioregenerative life support systems on long-duration space missions by automatically scrubbing carbon dioxide, creating oxygen, purifying water and producing food. "There is also a huge psychological benefit of growing plants in space," said Levine. "When you have a crew floating around in a tin can, a plant is a little piece of home they can bring with them."
https://www.nasa.gov/mission_pages/station/research/BRIC-16.html
UMD Astronomy Assistant Professor Eliza Kempton co-authored report that outlines a long-term strategy to explore distant planets that might harbor life Within the past decade, astronomers have discovered thousands of planets orbiting stars outside our solar system. Ranging in size from smaller than Earth’s moon to several times larger than Jupiter, these planets—known as extrasolar planets or exoplanets—represent a new frontier in space exploration. Many questions about exoplanets and their host stars remain unanswered. For example, do these planetary systems resemble our solar system, or have they taken on a wide variety of sizes and structures? Do any of these exoplanets have the right conditions to support life? If so, has life already evolved there? To answer these and other questions about distant planetary systems, a new congressionally mandated report by the National Academies of Sciences, Engineering, and Medicine, co-authored by University of Maryland Astronomy Assistant Professor Eliza Kempton, recommended that NASA should lead a large, long-term direct imaging mission. At present, most exoplanet observations rely on indirect methods, such as measuring changes in the light from a planet’s host star during the planet’s orbital cycle. To gain the information required to tackle complex questions about exoplanets, the report suggested that the future mission should be centered on an advanced space telescope capable of directly imaging smaller, Earth-like exoplanets that orbit stars similar to the sun. “My expertise is primarily in the theory of exoplanet atmospheres, with a focus on small planets,” Kempton said. “A big goal of the report was to make a roadmap for the next decade plus, and we decided we didn’t want to step away from being very ambitious. A primary goal is to characterize planets that can and do bear life—a push toward finding the next Earth.” A recent addition to UMD’s faculty, Kempton was previously an assistant professor of physics at Grinnell College in Grinnell, Iowa. In her research, she uses theory to predict what astronomers should expect to observe from exoplanets with specific atmospheric compositions—especially those that are slightly larger than Earth, often referred to as super-Earths. In 2010, Kempton co-authored a paper in the journal Nature that described the first observation of a super-Earth atmosphere. She also works with observational astronomers to interpret their results based on theoretical predictions. Since completing her Ph.D. at Harvard in 2009, she has served on several NASA and National Science Foundation advisory committees, helping to determine exoplanet research priorities for the Hubble Space Telescope and other observing facilities. Kempton and the other committee members who authored the National Academies report identified two overarching goals in exoplanet research: - To understand the formation and evolution of planetary systems as products of star formation and to characterize the diversity of their architectures, composition and environments; and - To learn enough about exoplanets to identify potentially habitable environments and to search for scientific evidence of life on worlds orbiting other stars. Based on these goals, the committee found that current knowledge of planets outside the solar system is substantially incomplete. 
To search for evidence of past and present life elsewhere in the universe, the research community will need a comprehensive approach to studying habitability in exoplanets using both theory and observations, according to the report. The committee recognized that developing a direct imaging capability will require large financial investments over more than a decade to see results. To detect a planetary system analogous to our own, the report recommended using instruments that enable direct imaging of an exoplanet by blocking the light emitted by the parent star. "Planning is already underway for the large next-generation telescopes that will follow the Webb Telescope," Kempton explained, referring to possible successors to NASA's highly anticipated James Webb Space Telescope mission, scheduled for launch in 2021. "Of these proposed missions, three have exoplanet science and habitable exoplanets among their key goals. Two of those are large direct imaging missions that would be able to take pictures of Earth-like planets orbiting near their host stars." In addition, ground-based astronomy—enabled by two U.S.-led telescopes—will also play a pivotal role, the report said. The Giant Magellan Telescope (GMT) being built in Chile and the proposed Thirty Meter Telescope (TMT) would enable profound advances in the imaging and spectroscopy of entire planetary systems. They may also be able to detect molecular oxygen in the atmosphere of Earth-like planets orbiting nearby small stars, the report said. The report said that NASA's Wide Field Infrared Survey Telescope (WFIRST), the large space-based mission that received the highest priority in the Academies' 2010 decadal survey, will play two extremely valuable roles: first, it will permit a survey of planets farther from their stars than those surveyed by NASA's Kepler spacecraft and other missions. Second, it will enable a large direct imaging mission. In addition to such forward-looking plans, Kempton noted that the Webb Telescope will play a significant role in the effort as soon as it is launched. "The scientific returns from these missions will be significant. We'll be getting data from the Webb Telescope's Early Release Science program basically right away," Kempton explained. "I've been waiting my whole career to measure the composition of small planets' atmospheres. We'll have those data very soon, so it's a very exciting time." This release was adapted from text provided by the National Academies of Sciences, Engineering, and Medicine. To view the original release, including additional technical information and a full listing of the committee members, please visit: http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=25187
https://cmns.umd.edu/news-events/features/4231
Strange communities of extreme microbes in hostile, otherworldly environments have been discovered at the bottom of the sea in the Gulf of Mexico, scientists announced today. They found reddish microbial oozings, along with volcanoes, craters, bubbling gas, mud pots and even dancing eels that seemed to follow the window of a moving deep-sea submersible craft. "These are the kind of scenes I imagine could exist on distant planets or other astronomical bodies," said Samantha Joye, a University of Georgia researcher who led an expedition to explore deep-sea mud volcanoes nearly 2,000 feet (600 meters) beneath the ocean surface.
Life as we know it
Such deep-sea vents are candidates as birthplaces for the first life on Earth, other studies have concluded. Yet little is known about them. Only about 5 percent of the world's ocean floors have been explored. By contrast, the far side of the moon is well mapped. Results of the study, detailed in the April 6 issue of the journal Nature Geoscience, have implications for understanding early life on Earth as well as the potential for life on Mars and other worlds, such as Jupiter's Europa, where similarly extreme conditions might support microbiological life. "Here we have more fascinating examples of microbial life coping with very, very unusual environments — regions of the ocean deeps that we can't help but describe as extreme or harsh," said Phillip Taylor, head of the National Science Foundation's Ocean Section. "Yet life has clearly adapted to exist, even thrive, in these systems. Such discoveries can't help but lead us to think that life beyond Earth is probable. Also, the discoveries of the evolved strategies for survival in unique environments have the potential to yield new uses of microbial processes and products in the biotech arena."
What's down there
The expedition, taking researchers to the seafloor in small submarines, examined an area in the Gulf of Mexico where clusters of seafloor vents spew mud, oil, brine and gases that support food chains independently of the sun, which gives life to much of the biology on the planet. Brine pools are ponds of hyper-saline water that fill a seafloor depression without mixing with overlying seawater. These types of ecosystems — which have only rarely been studied by microbiologists or visited by anyone — are particularly hostile to much of life because they are devoid of light and oxygen, and are super-salty and bathed in noxious gases. Nevertheless, researchers found that a mud volcano and a brine pool each support dynamic microbial communities. These microbial communities are not only distinct from each other but are also distinct from the microbial communities that live in the surrounding ocean. "Near mud volcanoes, we saw thick plumes of gas bubbles ejected from boiling mud pots that are similar to those found in Yellowstone National Park," Joye explained. "These gas plumes, consisting mainly of methane, extended hundreds of meters from the sea floor."
Fast to adapt
The volcanoes erupt as often as daily, and the mud volcanoes and brine pools last just tens of thousands of years — short by geological standards. That means life must adapt quickly to survive. The scientists found evidence of this. "The diversity and distribution of the microbes we studied say a lot about how life adapts to extreme environments," Joye said.
"We believe that the composition of the microbial communities and their metabolisms are linked to environmental differences, mainly in the geochemistry and intensity and frequency of fluid expulsion between the sites." The rapidly changing environments raise new questions. "If these microbial communities are unique to each extreme environment, then how do the microbes that live in mud volcanoes reach and colonize these remote ecosystems in the first place or, for that matter, locate other mud volcanoes?" said Lita Proctor, an NSF program director. "Do they patiently wait in the ocean floor until a new mud volcano bursts through, or do they somehow migrate between mud volcanoes?"
https://www.livescience.com/3461-otherworldly-scenes-seafloor.html
As NASA develops concepts for longer crewed missions to Mars and beyond, the agency will need innovative and sustainable food systems that check all the boxes. The Deep Space Food Challenge, a NASA Centennial Challenge, seeks ideas for novel food production technologies or systems that require minimal resources and produce minimal waste, while providing safe, nutritious, and tasty food for long-duration human exploration missions. Solutions from this challenge could enable new avenues for food production around the world, especially in extreme environments, resource-scarce regions, and in new places like urban areas and in locations where disasters disrupt critical infrastructure.
https://spaceleaks.com/story/nasa-deep-space-food-challenge-opens/
Uncrewed Systems and Their Role in the Energy Transition The emergence of reliable, affordable, and accessible uncrewed systems reveals their potential to play a valuable role in the energy transition. Autonomous aircraft, robots, vehicles, and boats have come out of the workshop and are now appearing as routine tools in a wide range of challenging environments where they are conducting an ever-growing array of complex observations, measurements, and work. Drawing upon recent trials and deployments, this SPE Live explores some of the ways in which autonomous systems can improve how we work, provided that the fundamental rules of safe and reliable operations are maintained and the decision to use advanced technology is justified by the benefits that it brings. Examples presented include the use of autonomous aircraft for measuring greenhouse-gas emissions, long-duration offshore surveys, and safety-critical inspections in hazardous locations. The autonomous systems can only be as good as the payload that they carry. This presentation presents the concept of "horse and rider," in which the sensors and tools (the rider) are viewed as working in partnership with the mode of transport (the horse). Together, they define the workplan and method. Working in this way allows rapid innovation, allowing technology providers to specialize and adapt rapidly to new opportunities. - Peter Evans, Senior Engineer, BP - Chris Adams, Commercial Director, Flylogix This video was first presented as an SPE Live event. More SPE Live videos can be found on the SPE Energy Stream.
https://jpt.spe.org/uncrewed-systems-and-their-role-in-the-energy-transition
In our perfect future, members of society are educated and healthy, live in concert with a thriving natural environment, and have access to their needs via resilient infrastructure. Technology is employed to improve and protect human and environmental health, providing economic opportunities for all, while unintended consequences are understood and minimized. This future is a place of harmony and balance based on the principles of sustainability and resiliency. We, the Department of Civil & Environmental Engineering at PSU, believe we have the ability to advance the scientific knowledge to realize this future. Our vision is to be known for employing the most advanced experimental and computational technologies in a sensible way to produce outstanding research, provide holistic education, and implement the created knowledge to serve society and the environment broadly. As civil and environmental engineers, we research, design, monitor, maintain, and repair the natural and built environment in ways that: 1) support development of healthy and safe environments and 2) create infrastructure systems that provide access to places, resources, and information. We use technologies to serve society and promote the health, comfort, connectivity and happiness of people. We help protect and sustain the environment. Our mission is to provide the tools, information, and talented engineers to build a more resilient, sustainable, and equitable future that supports and enriches daily life with efficient use of resources. We aim to develop sensible solutions that are low-imprint, high impact. These solutions should minimize energy and resource use, disturbance of communities and natural systems, and human distraction. They are simple, inspired by nature, and beautiful. By asking the right questions, applying fundamental principles, and using feedback from many stakeholders, we strive to create such solutions. We understand that in a rapidly changing world, it is critical that we can distinguish between useful information and noise, between what is essential and unimportant, and how to distill trustworthy information. Our culture supports a transdisciplinary research environment with a diverse set of collaborators. We work with experts within and outside our department to understand the bigger picture, define problems together, and combine our knowledge and tools to find appropriate solutions. We pose these questions not alone but together with others. In support of our vision and to promote our mission, the department will be renowned for scientifically sound, socially and environmentally conscious research and education in three areas: - Prevention, planning, and management of extraordinary events; - Evaluation, development, and deployment of technologies that promote a better society; and - Creation and maintenance of healthy environments All of these focal areas have elements that overlap and build on our current expertise, reinforcing our ability to make impactful contributions that support the advancement of society. Our location in the living laboratory of Portland, OR and the Pacific Northwest is advantageous as both are national leaders in these emerging areas and we have strong relationships with public agencies and other potential partners. More details about these research areas are provided below. 
Prevention, planning, and management of extraordinary events We strive to minimize the risks and uncertainties from natural and man-made hazards such as extreme events and environmental degradation, including those brought on by global climate change and regional perturbations of climate change. In CEE, we aim to research and create new knowledge about how and why extraordinary events occur, how systems adapt and change, and what mitigations are possible and appropriate. By creating highly skilled engineers and developing alternative materials, nature-inspired solutions, and resilient infrastructure, we can help communities plan, adapt, and even thrive in the face of systemic perturbations. Our research and education goals are not merely aimed at minimizing the loss of life, but restoring vitality to human and natural systems as a consequence. As we live in and operate from the Pacific Northwest, this research focus area is grounded in the reality of our unique set of regional assets and risks, which imposes a sense of urgency that propels us forward. Evaluation, development, and deployment of technologies that promote a better society Automated and connected cyber-physical systems have the potential to shape society in a number of ways: improve public health, safety, and security; foster environmental understanding and stewardship; improve efficiency and productivity; and provide data and analysis to support better decision making. However, these disruptive forces also pose societal risks and ethical hazards. In CEE we believe we can propose a vision in which we embrace cyber-physical systems in targeted ways that promote a healthy and informed society. To this end, we will work with other disciplines to develop appropriate technologies, test them in our living laboratory, train engineers, and strive to diminish unintended consequences. Creation and maintenance of healthy environments The origins of civil and environmental engineering are rooted in promoting and protecting human and environmental health. As a discipline, we are well positioned to tackle our current challenges by learning from nature, working with communities, finding elegant solutions, and training future engineers to utilize these approaches. Our department works to monitor changes in our natural and built environment and understand the implications of those changes for humans and ecosystems. We work to find creative solutions to prevent and mitigate the degradation of our natural resources and to continually improve the surroundings we design. At a time when our planet is in distress, we pursue research to support actions that will mitigate harm to natural ecosystems and create environments that support human wellbeing.
https://www.pdx.edu/civil-environmental-engineering/vision-statement
A satellite is orbiting a distant planet of mass 9.6e25 kg and radius 86700 km. The satellite orbits at a height of 8610 km above the surface of the planet. (See the short numerical sketch after the list of similar questions below.)
Similar Questions
- Physics: A synchronous satellite, which always remains above the same point on a planet's equator, is put in orbit around Neptune so that scientists can study a surface feature. Neptune rotates once every 16.1 h. Use the data of Table 13.2 to find the altitude of
- Math: Ms. Sue please help! A radio signal travels at 3.00 • 10^8 meters per second. How many seconds will it take for a radio signal to travel from a satellite to the surface of Earth if the satellite is orbiting at a height of 3.54 • 10^7 meters? Show your
- Physics: Astronauts on a distant planet set up a simple pendulum of length 1.2 m. The pendulum executes simple harmonic motion and makes 100 complete oscillations in 450 s. What is the magnitude of the acceleration due to gravity on this planet?
- Math: Radio signals travel at a rate of 3x10^8 meters per second. How many seconds would it take for a radio signal to travel from a satellite to the surface of earth if the satellite is orbiting at a height of 3.6x10^7 meters? A. 8.3 seconds B. 1.2x10^-1 seconds
- Physics: A satellite has a mass of 5850 kg and is in a circular orbit 3.8 x 10^5 m above the surface of a planet. The period of the orbit is two hours. The radius of the planet is 4.15 x 10^6 m. What is the true weight of the satellite when it is at rest on the planet's
- Physics: A communications satellite with a mass of 450 kg is in a circular orbit about the Earth. The radius of the orbit is 2.9×10^4 km as measured from the center of the Earth. Calculate the weight of the satellite on the surface of the Earth. Calculate the
- Science check answer quick!: I really do not understand Number 7. Which of these does Newton's law of universal gravitation imply? (Points : 1) The force of gravity between two objects is inversely proportional to the product of the two masses.
- Science: Imagine two artificial satellites orbiting Earth at the same distance. One satellite has a greater mass than the other one. Which of the following would be true about their motion? A. The satellite with the greater mass is being pulled toward Earth w/ less
- Physics: An artificial satellite circling the Earth completes each orbit in 139 minutes. (The radius of the Earth is 6.38 x 10^6 m. The mass of the Earth is 5.98 x 10^24 kg.) (a) Find the altitude of the satellite.
- Physics: A satellite is orbiting Earth at an altitude of 600 km. Use a mass of 5.98×10^24 kg and a radius of 6.378×10^6 m for Earth. The magnitude of the satellite's acceleration is ___ m/s^2 [down].
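The first question above does not say which quantity is wanted, but the given numbers are enough to find the satellite's orbital radius, circular-orbit speed, and period from Newton's law of gravitation. The Python sketch below is one way to set that up; the variable names, the use of the standard value of G (not given in the post), and the choice of outputs are illustrative assumptions rather than part of the original question.

```python
import math

G = 6.674e-11      # gravitational constant, N m^2 / kg^2 (standard value, not given in the post)
M = 9.6e25         # planet mass from the question, kg
R_planet = 86700e3 # planet radius from the question, m
h = 8610e3         # orbital altitude above the surface, m

r = R_planet + h              # orbital radius measured from the planet's center
v = math.sqrt(G * M / r)      # circular-orbit speed: v = sqrt(G*M / r)
T = 2 * math.pi * r / v       # orbital period: circumference divided by speed

print(f"orbital radius r = {r:.3e} m")
print(f"orbital speed  v = {v:.0f} m/s")
print(f"orbital period T = {T:.0f} s ({T / 3600:.1f} h)")
```

With these inputs the speed comes out to roughly 8.2 km/s and the period to roughly 20 hours.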
https://questions.llc/questions/1591344/a-satellite-is-orbiting-a-distant-planet-of-mass-9-6e25-kg-and-radius-86700-km-the
At their closest approach, Venus and Earth are 4.2x10^10 m apart. The mass of Venus is 4.87x10^24 kg, the mass of Earth is 5.97x10^24 kg, and G = 6.67x10^-11 Nm^2/kg^2. What is the force exerted by Venus on Earth at that point? (See the short numerical sketch after the list of similar questions below.)
- Physics: Sally has a mass of 50.0 kg and earth has a mass of 5.98X10^24 kg. The radius of earth is 6.371x10^6 m. A) What is the force of gravitational attraction between Sally and earth? B) What is Sally's weight?
- Science/physics: The moon is the Earth's nearest neighbour in space. The radius of the moon is approximately one quarter of the Earth's radius, and its mass is one eightieth of the Earth's mass. a) Calculate the weight of a man with a mass of 80 kg
- Math: Earth's mass, in kilograms, is 5.97x10^24. The moon's mass is 7.34x10^22. How many times greater is Earth's mass (in scientific notation)?
- Physics: A newly-discovered planet, "Cosmo", has a mass that is 4 times the mass of the Earth. The radius of the Earth is Re. The gravitational field strength at the surface of Cosmo is equal to that at the surface of the Earth if the
- Physics: The radius of the moon is 27% of the earth's radius and its mass is 1.2% of the earth's mass. Find the acceleration due to gravity on the surface of the moon.
- Physics: Please help! Calculate the position of the center of mass of the following pairs of objects. Use a coordinate system where the origin is at the center of the more massive object. Give your answer not in meters but as a fraction of the radius as
- Physics: Given that the Earth's mass is 5.98X10^24 kg, and the radius of the moon's orbit around the Earth is approximately 3.85X10^8 m, calculate: a) The speed with which the moon orbits the Earth. b) The orbital period of the moon in
- Physics: A satellite circles the earth in an orbit whose radius is 2.07 times the earth's radius. The earth's mass is 5.98 x 10^24 kg, and its radius is 6.38 x 10^6 m. What is the period of the satellite?
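For the Venus-Earth question at the top of this thread, Newton's law of universal gravitation, F = G m1 m2 / d^2, gives the answer directly. Here is a minimal Python sketch using only the values quoted in the question; the print formatting is an illustrative choice.

```python
G = 6.67e-11       # N m^2 / kg^2, as given in the question
m_venus = 4.87e24  # kg
m_earth = 5.97e24  # kg
d = 4.2e10         # m, closest Venus-Earth separation given in the question

# Newton's law of universal gravitation: F = G * m1 * m2 / d^2
F = G * m_venus * m_earth / d**2
print(f"Force Venus exerts on Earth: {F:.2e} N")  # on the order of 1e18 N
```

By Newton's third law, Earth exerts a force of the same magnitude on Venus.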
https://www.jiskha.com/questions/476782/earth-has-a-mass-of-5-97x10-24kg-and-a-radius-of-6-38x10-6m-find-the-wight-of-a-65-0kg
- State Kepler's laws of planetary motion.
- Derive Kepler's third law for circular orbits.
- Discuss the Ptolemaic model of the universe.

Examples of gravitational orbits abound. Hundreds of artificial satellites orbit Earth together with thousands of pieces of debris. The Moon's orbit about Earth has intrigued humans from time immemorial. The orbits of planets, asteroids, meteors, and comets about the Sun are no less interesting. If we look further, we see almost unimaginable numbers of stars, galaxies, and other celestial objects orbiting one another and interacting through gravity. All these motions are governed by gravitational force, and it is possible to describe them to various degrees of precision. Precise descriptions of complex systems must be made with large computers. However, we can describe an important class of orbits without the use of computers, and we shall find it instructive to study them. These orbits have the following characteristics:
- A small mass $m$ orbits a much larger mass $M$. This allows us to view the motion as if $M$ were stationary—in fact, as if from an inertial frame of reference placed on $M$—without significant error. Mass $m$ is the satellite of $M$, if the orbit is gravitationally bound.
- The system is isolated from other masses. This allows us to neglect any small effects due to outside masses.
The conditions are satisfied, to good approximation, by Earth's satellites (including the Moon), by objects orbiting the Sun, and by the satellites of other planets. Historically, planets were studied first, and there is a classical set of three laws, called Kepler's laws of planetary motion, that describe the orbits of all bodies satisfying the two previous conditions (not just planets in our solar system). These descriptive laws are named for the German astronomer Johannes Kepler (1571–1630), who devised them after careful study (over some 20 years) of a large amount of meticulously recorded observations of planetary motion done by Tycho Brahe (1546–1601). Such careful collection and detailed recording of methods and data are hallmarks of good science. Data constitute the evidence from which new interpretations and meanings can be constructed.

Kepler's Laws of Planetary Motion

Kepler's First Law: The orbit of each planet about the Sun is an ellipse with the Sun at one focus.

Kepler's Second Law: Each planet moves so that an imaginary line drawn from the Sun to the planet sweeps out equal areas in equal times.

Kepler's Third Law: The ratio of the squares of the periods of any two planets about the Sun is equal to the ratio of the cubes of their average distances from the Sun. In equation form, this is
$$\frac{T_1^{\,2}}{T_2^{\,2}} = \frac{r_1^{\,3}}{r_2^{\,3}},$$
where $T$ is the period (time for one orbit) and $r$ is the average radius. This equation is valid only for comparing two small masses orbiting the same large one. Most importantly, this is a descriptive equation only, giving no information as to the cause of the equality. Note again that while, for historical reasons, Kepler's laws are stated for planets orbiting the Sun, they are actually valid for all bodies satisfying the two previously stated conditions.

Example: Given that the Moon orbits Earth each 27.3 d and that it is an average distance of $3.84\times10^{8}\ \text{m}$ from the center of Earth, calculate the period of an artificial satellite orbiting at an average altitude of 1500 km above Earth's surface.

Strategy: The period, or time for one orbit, is related to the radius of the orbit by Kepler's third law, given in mathematical form above.
Let us use the subscript 1 for the Moon and the subscript 2 for the satellite. We are asked to find $T_2$. The given information tells us that the orbital radius of the Moon is $r_1 = 3.84\times10^{8}\ \text{m}$, and that the period of the Moon is $T_1 = 27.3\ \text{d}$. The height of the artificial satellite above Earth's surface is given, and so we must add the radius of Earth (6380 km) to get $r_2 = (1500 + 6380)\ \text{km} = 7880\ \text{km}$. Now all quantities are known, and so $T_2$ can be found.

Solution: Kepler's third law is
$$\frac{T_1^{\,2}}{T_2^{\,2}} = \frac{r_1^{\,3}}{r_2^{\,3}}.$$
To solve for $T_2$, we cross-multiply and take the square root, yielding
$$T_2 = T_1\left(\frac{r_2}{r_1}\right)^{3/2}.$$
Substituting known values yields
$$T_2 = 27.3\ \text{d}\times\frac{24.0\ \text{h}}{\text{d}}\times\left(\frac{7880\ \text{km}}{3.84\times10^{5}\ \text{km}}\right)^{3/2} = 1.93\ \text{h}.$$

Discussion: This is a reasonable period for a satellite in a fairly low orbit. It is interesting that any satellite at this altitude will orbit in the same amount of time. This fact is related to the condition that the satellite's mass is small compared with that of Earth.

People immediately search for deeper meaning when broadly applicable laws, like Kepler's, are discovered. It was Newton who took the next giant step when he proposed the law of universal gravitation. While Kepler was able to discover what was happening, Newton discovered that gravitational force was the cause.

Derivation of Kepler's Third Law for Circular Orbits

We shall derive Kepler's third law, starting with Newton's laws of motion and his universal law of gravitation. The point is to demonstrate that the force of gravity is the cause for Kepler's laws (although we will only derive the third one). Let us consider a circular orbit of a small mass $m$ around a large mass $M$, satisfying the two conditions stated at the beginning of this section. Gravity supplies the centripetal force to mass $m$. Starting with Newton's second law applied to circular motion,
$$F_{\text{net}} = ma_c = m\frac{v^{2}}{r}.$$
The net external force on mass $m$ is gravity, and so we substitute the force of gravity for $F_{\text{net}}$:
$$G\frac{mM}{r^{2}} = m\frac{v^{2}}{r}.$$
The mass $m$ cancels, yielding
$$G\frac{M}{r} = v^{2}.$$
The fact that $m$ cancels out is another aspect of the oft-noted fact that at a given location all masses fall with the same acceleration. Here we see that at a given orbital radius $r$, all masses orbit at the same speed. (This was implied by the result of the preceding worked example.) Now, to get at Kepler's third law, we must get the period $T$ into the equation. By definition, period $T$ is the time for one complete orbit. Now the average speed $v$ is the circumference divided by the period—that is,
$$v = \frac{2\pi r}{T}.$$
Substituting this into the previous equation gives
$$G\frac{M}{r} = \frac{4\pi^{2}r^{2}}{T^{2}}.$$
Solving for $T^{2}$ yields
$$T^{2} = \frac{4\pi^{2}}{GM}r^{3}.$$
Using subscripts 1 and 2 to denote two different satellites, and taking the ratio of the last equation for satellite 1 to satellite 2 yields
$$\frac{T_1^{\,2}}{T_2^{\,2}} = \frac{r_1^{\,3}}{r_2^{\,3}}.$$
This is Kepler's third law. Note that Kepler's third law is valid only for comparing satellites of the same parent body, because only then does the mass of the parent body $M$ cancel.

Now consider what we get if we solve $T^{2} = \frac{4\pi^{2}}{GM}r^{3}$ for the ratio $r^{3}/T^{2}$. We obtain a relationship that can be used to determine the mass $M$ of a parent body from the orbits of its satellites:
$$\frac{r^{3}}{T^{2}} = \frac{GM}{4\pi^{2}}.$$
If $r$ and $T$ are known for a satellite, then the mass $M$ of the parent can be calculated. This principle has been used extensively to find the masses of heavenly bodies that have satellites. Furthermore, the ratio $r^{3}/T^{2}$ should be a constant for all satellites of the same parent body (because $r^{3}/T^{2} = GM/4\pi^{2}$). (See the table of orbital periods below.) It is clear from the data that the ratio $r^{3}/T^{2}$ is constant, at least to the third digit, for all listed satellites of the Sun, and for those of Jupiter. Small variations in that ratio have two causes—uncertainties in the $r$ and $T$ data, and perturbations of the orbits due to other bodies. Interestingly, those perturbations can be—and have been—used to predict the location of new planets and moons.
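As a numerical cross-check of the worked example and of the $r^3/T^2$ relation just derived, the following Python sketch recomputes the 1500 km satellite's period from the Moon's orbit and then inverts $r^3/T^2 = GM/4\pi^2$ to estimate Earth's mass. The lunar values used (27.3 d and 3.84×10^8 m) are the standard textbook figures assumed above.

```python
import math

# Worked example: period of a satellite 1500 km above Earth's surface,
# scaled from the Moon's orbit with Kepler's third law, T2 = T1 * (r2/r1)**1.5
T_moon = 27.3 * 86400        # Moon's orbital period, s
r_moon = 3.84e8              # Moon's mean orbital radius, m (standard value)
r_sat = (6380 + 1500) * 1e3  # Earth's radius plus altitude, m

T_sat = T_moon * (r_sat / r_moon) ** 1.5
print(f"satellite period: {T_sat / 3600:.2f} h")  # about 1.93 h

# Parent-body mass from r^3 / T^2 = G*M / (4*pi^2), applied to the Moon's orbit
G = 6.674e-11
M_earth = 4 * math.pi**2 * r_moon**3 / (G * T_moon**2)
print(f"estimated Earth mass: {M_earth:.2e} kg")  # about 6.0e24 kg
```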
This is another verification of Newton’s universal law of gravitation. Newton’s universal law of gravitation is modified by Einstein’s general theory of relativity, as we shall see in Particle Physics. Newton’s gravity is not seriously in error—it was and still is an extremely good approximation for most situations. Einstein’s modification is most noticeable in extremely large gravitational fields, such as near black holes. However, general relativity also explains such phenomena as small but long-known deviations of the orbit of the planet Mercury from classical predictions. The Case for Simplicity The development of the universal law of gravitation by Newton played a pivotal role in the history of ideas. While it is beyond the scope of this text to cover that history in any detail, we note some important points. The definition of planet set in 2006 by the International Astronomical Union (IAU) states that in the solar system, a planet is a celestial body that: - is in orbit around the Sun, - has sufficient mass to assume hydrostatic equilibrium and - has cleared the neighborhood around its orbit. A non-satellite body fulfilling only the first two of the above criteria is classified as “dwarf planet.” In 2006, Pluto was demoted to a ‘dwarf planet’ after scientists revised their definition of what constitutes a “true” planet. |Parent||Satellite||Average orbital radius r(km)||Period T(y)||r3 / T2 (km3 / y2)| |Earth||Moon||0.07481| |Sun||Mercury||0.2409| |Venus||0.6150| |Earth||1.000| |Mars||1.881| |Jupiter||11.86| |Saturn||29.46| |Neptune||164.8| |Pluto||248.3| |Jupiter||Io||0.00485 (1.77 d)| |Europa||0.00972 (3.55 d)| |Ganymede||0.0196 (7.16 d)| |Callisto||0.0457 (16.19 d)| The universal law of gravitation is a good example of a physical principle that is very broadly applicable. That single equation for the gravitational force describes all situations in which gravity acts. It gives a cause for a vast number of effects, such as the orbits of the planets and moons in the solar system. It epitomizes the underlying unity and simplicity of physics. Before the discoveries of Kepler, Copernicus, Galileo, Newton, and others, the solar system was thought to revolve around Earth as shown in (Figure)(a). This is called the Ptolemaic view, for the Greek philosopher who lived in the second century AD. This model is characterized by a list of facts for the motions of planets with no cause and effect explanation. There tended to be a different rule for each heavenly body and a general lack of simplicity. (Figure)(b) represents the modern or Copernican model. In this model, a small set of rules and a single underlying force explain not only all motions in the solar system, but all other situations involving gravity. The breadth and simplicity of the laws of physics are compelling. As our knowledge of nature has grown, the basic simplicity of its laws has become ever more evident. Section Summary - Kepler’s laws are stated for a small mass orbiting a larger mass in near-isolation. Kepler’s laws of planetary motion are then as follows: Kepler’s first law The orbit of each planet about the Sun is an ellipse with the Sun at one focus. Kepler’s second law Each planet moves so that an imaginary line drawn from the Sun to the planet sweeps out equal areas in equal times. 
Kepler’s third law The ratio of the squares of the periods of any two planets about the Sun is equal to the ratio of the cubes of their average distances from the Sun: where is the period (time for one orbit) and is the average radius of the orbit. - The period and radius of a satellite’s orbit about a larger body are related by or Conceptual Questions In what frame(s) of reference are Kepler’s laws valid? Are Kepler’s laws purely descriptive, or do they contain causal information? Problem Exercises A geosynchronous Earth satellite is one that has an orbital period of precisely 1 day. Such orbits are useful for communication and weather observation because the satellite remains above the same point on Earth (provided it orbits in the equatorial plane in the same direction as Earth’s rotation). Calculate the radius of such an orbit based on the data for the moon in (Figure). Calculate the mass of the Sun based on data for Earth’s orbit and compare the value obtained with the Sun’s actual mass. Find the mass of Jupiter based on data for the orbit of one of its moons, and compare your result with its actual mass. Find the ratio of the mass of Jupiter to that of Earth based on data in (Figure). Astronomical observations of our Milky Way galaxy indicate that it has a mass of about solar masses. A star orbiting on the galaxy’s periphery is about light years from its center. (a) What should the orbital period of that star be? (b) If its period is years instead, what is the mass of the galaxy? Such calculations are used to imply the existence of “dark matter” in the universe and have indicated, for example, the existence of very massive black holes at the centers of some galaxies. Integrated Concepts Space debris left from old satellites and their launchers is becoming a hazard to other satellites. (a) Calculate the speed of a satellite in an orbit 900 km above Earth’s surface. (b) Suppose a loose rivet is in an orbit of the same radius that intersects the satellite’s orbit at an angle of relative to Earth. What is the velocity of the rivet relative to the satellite just before striking it? (c) Given the rivet is 3.00 mm in size, how long will its collision with the satellite last? (d) If its mass is 0.500 g, what is the average force it exerts on the satellite? (e) How much energy in joules is generated by the collision? (The satellite’s velocity does not change appreciably, because its mass is much greater than the rivet’s.) a) b) c) d) e) Unreasonable Results (a) Based on Kepler’s laws and information on the orbital characteristics of the Moon, calculate the orbital radius for an Earth satellite having a period of 1.00 h. (b) What is unreasonable about this result? (c) What is unreasonable or inconsistent about the premise of a 1.00 h orbit? a) b) This radius is unreasonable because it is less than the radius of earth. c) The premise of a one-hour orbit is inconsistent with the known radius of the earth. Construct Your Own Problem On February 14, 2000, the NEAR spacecraft was successfully inserted into orbit around Eros, becoming the first artificial satellite of an asteroid. Construct a problem in which you determine the orbital speed for a satellite near Eros. You will need to find the mass of the asteroid and consider such things as a safe distance for the orbit. Although Eros is not spherical, calculate the acceleration due to gravity on its surface at a point an average distance from its center of mass. 
Your instructor may also wish to have you calculate the escape velocity from this point on Eros.
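The following Python sketch (the constants are standard assumed values, not data taken from the text) illustrates the kind of calculation the exercises above call for: the radius of a geosynchronous orbit and the mass of the Sun, both from T² = 4π² r³ / (G M).

import math

G = 6.674e-11               # gravitational constant, N·m²/kg² (assumed)

# Geosynchronous orbit radius from T^2 = 4*pi^2 * r^3 / (G * M_earth).
M_earth = 5.97e24           # kg (assumed)
T_day   = 86164.0           # sidereal day, seconds
r_geo = (G * M_earth * T_day**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"Geosynchronous radius ≈ {r_geo/1e6:.1f} Mm from Earth's centre")   # ≈ 42 Mm

# Mass of the Sun from Earth's orbit, using M = 4*pi^2 * r^3 / (G * T^2).
r_earth_orbit = 1.496e11    # m (assumed)
T_year        = 3.156e7     # s
M_sun = 4 * math.pi**2 * r_earth_orbit**3 / (G * T_year**2)
print(f"Mass of the Sun ≈ {M_sun:.2e} kg")                                 # ≈ 2×10^30 kg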
https://opentextbc.ca/openstaxcollegephysics/chapter/satellites-and-keplers-laws-an-argument-for-simplicity/
The most often described space tethers are the momentum exchange tethers, so called because they allow the transfer of momentum (and thus energy) between two objects. The main benefit of these types of tethers is that they allow changing the orbits of spacecraft without using any rocket propulsion. For example, imagine two equal satellites connected by a long tether and circling Earth, one in a lower orbit than the other. Because the satellites are the same, the center of mass of the combination lies halfway between them. If we want to calculate the orbital speed of this satellite system, we could pretend we are dealing with a single spacecraft located at the combination's center of mass. For the laws of orbital mechanics, it is irrelevant what the satellite system looks like; the whole contraption will stay in orbit as long as its center of mass has the right speed for the altitude at which it is circling. However, because the lower satellite is closer to Earth than is the center of mass, it is actually orbiting too slowly for its altitude. It will therefore try to fall back to Earth. In contrast, the other satellite is moving too fast for the gravity at its orbital altitude, and will try to pull away. Because the spacecraft are kept together by the tether, they continue orbiting at an average speed that is too slow for the lower satellite and too fast for the higher one, but just right for the system as a whole—that is, its center of mass. The result is that both satellites are pulling like dogs on a leash, so that the tether remains taut and forces the two to orbit like a single spacecraft system. Moreover, because one satellite is trying to fall back to Earth and the other is trying to pull away, the tether system automatically orientates itself into a stable, vertical position perpendicular to Earth's surface. Since this effect is caused by the difference in gravity at different orbital altitudes, it is called gravity-gradient stabilization. It is as if the lower satellite is being dragged along by the higher one, and in turn the upper satellite is pulled back by the lower satellite. The spacecraft are basically sharing their individual momentum via the tether, hence the term momentum exchange tether. If the tether were to go slack, each satellite would be able to move independently and go its own way, one moving down and one up until the tether was stretched tight and vertical again. What happens if we cut the tether? The lower, "too slow" satellite is now free to follow its own orbit and starts to fall. Because it lacks the energy to stay at its original altitude, it will enter an elliptical orbit with a lower perigee than before. Its original orbital altitude becomes its apogee. If the new orbit intersects Earth's atmosphere, that is, if the perigee is too low, the lower satellite will actually reenter the atmosphere. Momentum exchange tethers can thus be used to return cargo capsules back to Earth or make obsolete satellites leave orbit and burn up in the atmosphere. The upper, "too fast" satellite, on the other hand, will shoot away to a higher altitude because it has too much energy. Its elliptical orbit will have its perigee at the altitude of the satellite's original, forced orbit, but a higher apogee than before. The two spacecraft have been put into different orbits using "tether propulsion" rather than rocket propulsion (Fig. 1.12). The process is analogous to what happens when an Olympic hammer thrower spins around and then lets go of the heavy weight.
The hammer will fly away, while in reaction the athlete is forced to step back (we can think of the hammer as the higher altitude satellite and the thrower as the lower altitude spacecraft). An interesting consequence of the dynamics of momentum exchange tethers is that in this case it is possible to "push" on a cable! On Earth, cables can be used only to pull things with, because they go slack the moment you release the tension on them. However, if we push the lower satellite in our example, for example using a rocket motor, it goes into a higher orbit. As a consequence, the upper satellite is given some leeway and allowed to increase its altitude as well, until blocked once more by the pull of the tether. As a result, the whole combination (i.e., the center of mass) will now enter a higher orbit; effectively we have pushed the satellites up, as if they were connected by a steel rod rather than a thin, flexible tether.
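A short Python sketch (with illustrative numbers of my own choosing, not taken from the text) makes the "too slow / too fast" argument quantitative by comparing the circular-orbit speed each end of the tether would need with the speed the center of mass actually has:

import math

G, M_earth, R_earth = 6.674e-11, 5.97e24, 6.371e6   # assumed constants, SI units

def v_circular(r):
    """Circular-orbit speed at radius r from the centre of the Earth."""
    return math.sqrt(G * M_earth / r)

# Two equal satellites joined by a 100 km tether (illustrative numbers only).
r_cm   = R_earth + 500e3        # centre of mass at 500 km altitude
r_low  = r_cm - 50e3            # lower end of the tether
r_high = r_cm + 50e3            # upper end of the tether

v_cm = v_circular(r_cm)         # roughly the speed imposed on both ends by the tether
print(f"lower end : needs {v_circular(r_low):.0f} m/s, moves at about {v_cm:.0f} m/s (too slow, wants to fall)")
print(f"upper end : needs {v_circular(r_high):.0f} m/s, moves at about {v_cm:.0f} m/s (too fast, wants to pull away)")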
https://www.fossilhunters.xyz/disruptive-technology/momentum-exchange.html
Zeroing in on Locations with GPS Looking to add accurate location mapping to an electronic system? In this series, Jeff introduces GPS technology, presents a circuit that accepts a standard string of serial data, and provides useful insights into topics such as receiver data processing and trilateration. As a youngster traveling in the family automobile, I could almost feel the level of friction between my parents when it came to driving directions obtained from a paper map. We had about a dozen of these accordion folded wonders crammed in the glove compartment. My mom wasn’t much of a map reader, so we would be constantly pulling over so dad could get out, de-origami a map on to the front hood of the car to get his bearings. Fifty (or so) years later, I remember a friend having a TomTom (mobile GPS device) stuck to his dashboard. For years, I was jealous. It wasn’t until Bev and I bought a Prius in 2009 that we experienced the joy of in car navigation. If you’ve ever depended on your automobile’s navigation system to get you somewhere you’ve never been (to) before, you may recall being frustrated by an older set of internal maps, when your destination did not exist. Not only is an update to these systems expensive, but the updates are generally out of date before they can be installed. The navigation app on my phone always downloads the latest maps, but it does require wireless access for its display abilities. Still, our phones are a veritable office on wheels. Or a playground, as we’ve seen lately with Pokémon Go the global sensation, which uses GPS and mapping to capture famous Pokémon characters. It’s no wonder globes are passé when you have access to Google Earth’s satellite mapping and Street View’s 360°. When the Soviet spacecraft Sputnik was launched in 1957, the US began work on the Global Positioning System (GPS) for military and intelligence applications. It is a network of satellites that orbit the Earth and beam down signals to earth. These coordinated signals allow a GPS receiver to triangulate position. Let’s skim off the cream of this satellite system in order to understand its usefulness. SATELLITES In the early 1960s, the Navy used Transit, a group of five satellites, to give the Polaris submarine fleet precise positioning based on Doppler frequency shift. Later that decade, advanced satellites proved that highly accurate clocks could be used to create a passive ranging technique if they were synchronized with one another. High-performance crystal oscillator clocks were soon replaced by cesium 133-based atomic clocks. Cesium’s principal resonance is used to PLL a crystal oscillator to exactly 9,192,631,770 Hz. Before we look at why this is important, let’s continue with the progress of the satellite system as it came to be known. By the late 1970s, the first of the “block I” satellites, Timation, developed by the Naval Research Laboratory, were being placed in orbit for military purposes. A total of 11 were in orbit by the mid-1980s. Their precise altitude is such that they orbit the earth once every 12 hours. Each satellite follows a different predetermined orbit, creating a net or block of coverage (see Figure 1). This strategy allows multiple satellites to be “seen” from any given point on the Earth’s surface at all times. With a projected lifetime of only five years at the time, improvements in lifespan became a critical design element to the next generation of satellites. Navstar was designed to operate with a block of at least 24 satellites in orbit. 
Lockheed Martin began launching the Navstar satellites in the late 1980s. A total of 21 satellites are active at any one time; the other 3 satellites parked as spares. This block was finished in 1995, but today’s GPS network has around 30 active satellites in all with the last of the “block II” family placed in orbit in January 2016. The next generation of “block III” satellites has already rolled off the drawing boards. This new batch of eight will displace older satellites and will provide more powerful signals in addition to enhanced signal reliability, accuracy, integrity, and a 15-year design lifespan. The first of these GPS “block III” satellites have been launched. SYNCHRONICITY The Global Positioning System Operations Center (GPSOC) at that Schriever Air Force Base in Colorado provides for the GPS constellation 24/7. It monitors all satellites and communicates with each to provide a continuous and accurate information base. There are three kinds of time available from GPS: GPS time, UTC as estimated and produced by the United States Naval Observatory, and the independent times of each free-running GPS satellite’s atomic clock. Let’s review each one. UTC refers to time referenced to the zero or Greenwich meridian. We derive local time using UTC with an offset of the number of time zones (hours) between our locality and Greenwich, England. The current version of UTC is based on International Atomic Time (TAI) and averaging data from some 200 atomic clocks in over 50 national laboratories initialized to UTC on January 1, 1970. This provides accurate ticks; however, since the Earth’s spin is slowing (taking ever so slightly more than 24 hours to complete one rotation) periodically, a leap second is added to the UTC to bring everything back in line. We calculate date and time based on this adjusted reference. However, since TAI is not affected by leap seconds (the artificial adjustment of UTC, due to the Earth’s actual rotation), it now leads UTC by 39 s since its inception! GPS time was initialized to UTC on January 6, 1980. Note: GPS time and TAI are both linked to atomic clocks, so they remain in sync, but because initialization of the two were on different dates, TAI remains 19 s ahead of GPS time. The master control station (MCS) in Colorado provides command and control of the GPS constellation of satellites. They generate and upload navigation messages to ensure the health and accuracy of the system. The MCS receives navigation information from the monitor stations around the globe to compute the precise locations of the GPS satellites in space, and then uploads this data to the satellites. This includes the data needed to keep all satellite timebases synchronized. In the event of a satellite failure, the MCS can reposition satellites to maintain an optimal GPS constellation. Without intervention, the constellation of satellites would begin to drift out of their respective orbits. This would lead to a false sense of position and a loss of synchronization. GPS receivers would calculate positional information based on the assumption of accurate information and the inaccuracies would produce unreliable results. GPS TRANSMISSIONS What began with a single transmission frequency has been upgraded, adding additional transmissions to increase accuracy, while remaining backward compatible with older satellites. Let’s take a look at the original microwave signal L1 at 1575.42 MHz. 
If you refer to Figure 2, you’ll see that the GPS signal has three parts, the carrier, which is modulated by a combination of the navigation message and the Coarse Acquisition (C/A) code. Satellites are uniquely identified by a serial number called space vehicle number (SVN), a space vehicle identifier (SV ID), and pseudorandom noise number (PRN number). Each satellite uses a unique PRN code, which does not correlate well with any other satellite’s PRN code. This allows the PRN (or C/A) to remain uniquely identifiable to one particular satellite. Think of it this way: each satellite transmits speech in a different language. By tuning you ear for a specific language, you can understand the content from one satellite; otherwise, it’s all noise. The navigation message is sent at a much lower rate requiring about 30 s and carries 1,500 bits of data. This includes the week number, precise “time-of-week,” and a health report for the satellite. This small amount of data is encoded with the C/A sequence that is different for each satellite. Since each GPS receiver knows the PRN codes for each satellite, it can not only distinguish between different satellites but also decode the navigation message. This message also includes orbital information particular to itself, which allows a receiver to calculate the time-of-flight from the position of the satellite. Almanac data contains information and status concerning all the satellites and helps a receiver determine which satellites are in service and the difference between UTC and GPS time. Remember the “leap seconds” issue? RECEIVER DATA PROCESSING For each satellite, a GPS receiver must first acquire the signal and then track it while it remains in sight. Acquisition is most difficult if no previous almanac information is present, as the receiver must search for all PRNs in its library. This may initially require several minutes. Once acquisition of a satellite has been made, the receiver continues to read successive PRN sequences and will encounter a sudden change in the phase of the PRN, the beginning of a navigation message. The navigation message or frame is divided into multiple subframes as seen in Table 1. The almanac has 25 pages of data and requires multiple subframes. GPS transmitted signals are so accurate—thanks to the accuracy of on-board atomic clocks and synchronization of all satellites from the GPSOC—that time-of-flight, and thus distance to the satellite, can be calculated to within a few billionths of a second. And this brings us to trilateration. TRILATERATION GPS trilateration is the process of determining relative and absolute location by measurement of distances, using the geometry of spheres. The sphere or bubble radius about a satellite is the measurement of distance calculated by the GPS receiver using time-of-flight of its transmitted signal. Using a single satellite, this distance indicates the receiver might be any point on the surface of that bubble. When the distances to two known satellites have been calculated, their bubbles intersect creating a circle. The receiver’s position is now narrowed down to all points on that circle. A third satellite’s bubble will intersect that circle at two points. One of those points will be the actual position of the GPS receiver. If we assume the receiver is on Earth, we could tell which of the two positions was in fact on Earth. The use of a fourth satellite can be used to verify which of the two is correct and also affirm the integrity of all the calculations. 
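The trilateration described above can also be sketched numerically. The short Python example below uses synthetic satellite positions and a made-up receiver location (nothing measured from the article) and solves for the receiver position plus its clock bias from four pseudoranges with a few Gauss-Newton iterations; it is an illustration of the idea rather than the algorithm any particular receiver uses.

import numpy as np

C = 299_792_458.0                      # speed of light, m/s

# Synthetic scenario: four GPS-like satellite positions (m) and a "true" receiver.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
true_rx   = np.array([6371e3, 0.0, 0.0])   # on the equator, for illustration
true_bias = 3e-3 * C                       # a 3 ms receiver clock error, as a range bias

# Pseudoranges = geometric range + clock-bias contribution.
rho = np.linalg.norm(sats - true_rx, axis=1) + true_bias

# Gauss-Newton: unknowns are x, y, z and the clock-bias range b.
x = np.zeros(4)                            # start at Earth's centre with zero bias
for _ in range(10):
    rng = np.linalg.norm(sats - x[:3], axis=1)
    residual = rho - (rng + x[3])
    J = np.hstack([-(sats - x[:3]) / rng[:, None], np.ones((4, 1))])
    dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
    x += dx

print("estimated position (m):", x[:3].round(1))
print("estimated clock bias (ms):", 1e3 * x[3] / C)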
You can see that the GPS receiver is a highly complex system comprising not only a high-gain receiver, but also a high-precision computational unit. Like many critical technologies, a standards agency is responsible for overseeing its use. Back in 1957, the National Marine Electronics Association (NMEA) was founded by a group of electronics dealers at the New York Boat Show who wished to strengthen relationships with electronic manufacturers. Their work has led to the NMEA 0183 Interface Standard, which defines electrical signal requirements, data transmission protocol and time, and specific sentence formats for serial communications (see Table 2). Each message sentence contains a specific data set and begins with “$” plus five letters, the first two of which indicate the source of the sentence. Every sentence contains only printable ASCII characters except for a trailing <CR><LF>. For instance, when a sentence begins with “$GPGLL,” the sentence has been written by a global positioning receiver and contains geographic position, latitude and longitude data—that is, $GPGLL,4916.45,N,12311.12,W,225444,A. Let’s look at the details. 4916.45,N means latitude 49° 16.45 min North. 12311.12,W represents longitude 123° 11.12 min West. 225444 is the fix taken at 22:54:44 UTC. And lastly, A means data is valid. Any sentence may include a checksum indicated by a “*xx” following the last data byte, where “xx” is the exclusive OR of all characters between, but not including, the “$” and “*”. Each manufacturer can choose what sentences it will send and how often. Many will stream data continuously without user intervention. Some manufacturers accept similarly formed sentences as input. This may be used to customize how the chip behaves (i.e., data rate). For this project, we will be interested in the $GPGGA sentence, because it includes latitude, longitude, and altitude. While I bought a number of GPS modules from various sources (see Photo 1), I chose an Adafruit Industries Ultimate GPS. It can operate from a 3.3-to-5-V supply (5-V tolerant I/O) with an onboard antenna, yet it has an external antenna connector. Using a four-wire interface, it mates up nicely to the RS-232 (TTL)-to-USB dongle I use in many projects. This will let us see the sentence output using a terminal program, RealTerm, on a PC. Note the output in Photo 2.
LISTENING TO THE GPS RECEIVER
To make the GPS useful, I display the latitude, longitude, and altitude data being sent out by the GPS module. To do this, I used a circuit that I’d standardized on for many projects when user I/O is required (see Figure 3). The GPS data comes in through the J7 serial port at 9,600 bps. If you refer to Photo 2 and locate the $GPGGA sentence, you will see it contains 14 pieces of data. No data between commas indicates the GPS hasn’t collected sufficient data for that entry. For this project, I defined the lengths of the data variables (see Listing 1) and the associated variables (see Listing 2). As a bare minimum, I need one ring buffer for the RX data. This enables the data to flow in unrestrained. The job of the main loop will be to watch for serial input and make a call to process it when available. At this point we are looking for the start of any sentence, the character ‘$’. A bad compare returns to the main loop. A match allows execution to remain in the processing routine and gather the next characters. If the new sentence exactly matches our string of interest “$GPGGA,”, then we can begin gathering data, as we know the format.
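Before moving on to the assembler-level handling described next, here is roughly what the same sentence matching, checksum test, and comma-delimited parsing look like in Python (a sketch of my own, independent of the article's PIC firmware; the example sentence is a commonly used NMEA illustration, not a capture from the author's module):

def nmea_checksum_ok(sentence: str) -> bool:
    """XOR of every character between '$' and '*' must equal the hex value after '*'."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return (not given) or calc == int(given[:2], 16)   # sentences without '*xx' are accepted

def parse_gpgga(sentence: str):
    """Pull time, latitude, longitude, fix quality, satellite count and altitude from $GPGGA."""
    if not sentence.startswith("$GPGGA,") or not nmea_checksum_ok(sentence):
        return None
    f = sentence.split("*")[0].split(",")
    return {
        "utc":        f[1],            # HHMMSS
        "latitude":   (f[2], f[3]),    # DDMM.MMMM, N/S
        "longitude":  (f[4], f[5]),    # DDDMM.MMMM, E/W
        "quality":    f[6],            # 0 means no fix yet
        "satellites": f[7],
        "altitude":   (f[9], f[10]),   # value, units ("M")
    }

example = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gpgga(example))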
If at any time the received data does not match what’s expected, we exit processing to await new data.
Listing 1 I defined the lengths of each comma-delimited ASCII string item in the $GPGGA message. Comments show the format expected.
LengthTime equ d'6' ; HHMMSS
LengthLatitude equ d'9' ; xxxx.xxxx
LengthLatitudeLabel equ d'1' ; x
LengthLongitude equ d'11' ; xxxxx.xxxx
LengthLongitudeLabel equ d'1' ; x
LengthQuality equ d'1' ; x
LengthSatellites equ d'2' ; xx
LengthHDOL equ d'3' ; x.x
LengthAltitude equ d'5' ; xxx.x
LengthAltitudeLabel equ d'1' ; x
Listing 2 The available GPS variables are declared using the expected lengths defined earlier.
Time:LengthTime
Latitude:LengthLatitude
LatitudeLabel:LengthLatitudeLabel
Longitude:LengthLongitude
LongitudeLabel:LengthLongitudeLabel
Quality:LengthQuality
Satellites:LengthSatellites
HDOL:LengthHDOL
Altitude:LengthAltitude
AltitudeLabel:LengthAltitudeLabel
Each data value has its own format and ends with a comma (just like the sentence prefix we just located). Knowing this, we can use the comma as a delimiter for the data. Time is the first data value, and it is the UTC time as six ASCII characters of the format HHMMSS. Latitude is a positive number between 0 and 90°, as nine ASCII characters using the format DDMM.SSSS. It can be north or south of the equator. And instead of using the “-” sign for south of the equator, the third data value is an ASCII “N” or “S.” Longitude is a similar positive number between 0 and 180°, as 10 ASCII characters using the format DDDMM.SSSS. It can be east or west of the prime meridian. And instead of using the “-” sign for west of the prime meridian, the fifth data value is an ASCII “E” or “W.” The next value is quality, and it describes not only the validity, but also how the data is produced. Anything other than “0” is valid data produced by calculation, estimation, simulation, or another method, as indicated by the byte value. Following the quality data value is the number of satellites used in the computation of position. It was shown previously that a minimum of three satellites is necessary to trilaterate position. More satellites provide a higher degree of verification and not necessarily more accuracy. Depending on the relative position of the satellites used, the accuracy of the computation may be affected. The highest accuracy can be attained when the distance measurement of each satellite can be used as a single component of the result. If you think of the receiver as a cube with three separate antennas pointing at three satellites directly over that cube’s surface, then the distance to each satellite affects only one dimension of the position. With more satellites to choose from, the possibility of getting closer to perfection goes up. The dilution of precision (DOP)—and Horizontal DOP (HDOP) and Vertical DOP (VDOP)—is an indication of how close the satellite positions come to perfection. The higher the number, the less accurate the computation. The next data in this sentence is the altitude, which is presented as distance above mean sea level. The data value is altitude in tenths. Following this data value is the unit of measurement, “M” (meters in this case). The remaining data (and the checksum) was not used in this project. However, I’ll mention the next data because of its potential importance. The World Geodetic System 1984 (WGS84) establishes a point (center of mass of the Earth) from which the surface ellipsoid is measured.
All water is then distributed by gravity over this surface creating a new average surface we call mean sea level. The “mean” eliminates the tide and other factors that affect the instantaneous sea level. Even though we’ve seen a rise in sea level of approximately 20 cm over the past 30 years, altitude is still commonly referred to as distance relative to the mean sea level. The next data value in this sentence is the distance of mean sea level above the surface ellipsoid. This is presented, as the altitude, with a value and unit. This was used to determine the mean sea level. DISPLAYING THE RESULTS I use a 4 × 20 display to present the collected data in the following format. Time (GMT) HH:MM:SS Lat xxºxx’.xxxx x Long xxxºxx’.xxxx x Alt xxx.x x Sat xx A number of string constants are defined that label each piece of data. The data received is mostly in the ASCII format needed for the display, with a few exceptions. The time can be delineated using colons between the hours, minutes, and seconds for a more familiar read. I also can add the degree and minute symbols to the latitude and longitude values to make this more recognizable. When the code runs, the immediately available time is presented quite quickly from a single satellite. Position data is garbage until the receiver can calculate position based on acquisition of multiple satellites. Note: This can be blanked based on the quality value. I’ve seen as many as 11 satellites indoors with just the existing antenna Photo 3. In Part 2 I’ll discuss what you can expect for accuracies. I’ll also explain how to go outside and use the setup for some cartography. SOURCE Ultimate GPS Module Adafruit | www.adafruit.com PUBLISHED IN CIRCUIT CELLAR MAGAZINE • JANUARY 2017 #318 – Get a PDF of the issueSponsor this Article Jeff Bachiochi (pronounced BAH-key-AH-key) has been writing for Circuit Cellar since 1988. His background includes product design and manufacturing. You can reach him at: [email protected] or at: www.imaginethatnow.com.
https://circuitcellar.com/research-design-hub/projects/real-world-mapping-part-1/
GPS satellite navigation is now ubiquitous in vehicles or on phones. But how do these devices know where you are? And how do geodetic instruments increase the precision enough to measure tectonic plate velocities of one millimeter per year or less? The Global Positioning System (GPS) was created by the US Department of Defense but ultimately made available for public use. There are other such satellite constellations, including Russia’s GLONASS and the European Union’s Galileo system. The general name for these systems is Global Navigation Satellite System (GNSS), but for simplicity, we will refer to GPS here as each functions similarly. GPS fundamentally consists of three components: satellites, ground stations, and receivers. There are currently 31 GPS satellites that actively orbit the Earth, with at least 24 required for global coverage. The position of each satellite must be precisely known in order for the system to work, so a network of ground stations is used to track their orbits. Finally, the receiver in your device records the signals from each satellite in the sky and uses that information to calculate your position on Earth. Each satellite is equipped with an atomic clock to precisely and accurately measure the time, which is included in the signal it broadcasts to receivers along with the satellite’s current position. Based on the difference between the time that the signal was sent and the time that it was received, you can calculate the distance between the receiver and the satellite. GPS receivers use a process called “trilateration” to compute their location based on the distance from each satellite. If you know your distance from one point in space, you could be located anywhere on a sphere around the point with a radius of that distance. That information alone is not very useful in determining your location, but if you know your distances from two different points, you can draw spheres around each of those points and see where they intersect. Once you do this for three satellites (assuming your receiver has an atomic clock perfectly synced with the satellites), the spheres are only going to intersect the Earth’s surface at one point—your location! With the addition of a fourth satellite, you can find your position without the need for an atomic clock in your receiver. And if you add even more satellites, the error bars on your position will shrink. Basic GPS receivers are usually accurate to within a few meters. That’s perfectly good for most of us when traveling, but geoscientists need location information that is far more precise. For example, the San Andreas Fault moves roughly 30 millimeters per year. Geoscientists and surveyors collect an abundance of data, analyze a different aspect of the GPS signal, and apply corrections to increase precision. (See below for explanations of these techniques.) Why go through all this trouble? Apart from the technologies enabled by high-precision positioning, there are many scientific applications. Data from the Network of the Americas—a network of permanent GPS stations from Alaska to the Caribbean—is used to study everything from earthquakes and volcanoes to weather conditions and sea level rise. By making such precise measurements, we are able to reveal critically important changes on our dynamic planet as they happen! Home - What is geophysics?
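A quick back-of-the-envelope check in Python (assumed values only) shows why the receiver clock matters so much, and hence why a fourth satellite is used to solve for the clock error instead of flying an atomic clock in every receiver:

C = 299_792_458.0                 # speed of light, m/s

travel_time = 0.072               # ~72 ms: a typical GPS signal transit time (assumed)
print(f"range to satellite ≈ {C * travel_time / 1e3:.0f} km")

for clock_error_s in (1e-9, 1e-6, 1e-3):
    print(f"a {clock_error_s:g} s clock error mis-measures range by {C * clock_error_s:.3g} m")

Even a microsecond of receiver clock error corresponds to roughly 300 m of range error, which is why the clock offset is treated as a fourth unknown rather than something the receiver is trusted to know.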
https://www.unavco.org/what-is/gps/
GNSS positioning is premised on the idea that the satellite positions are known, or can be calculated. Errors in the computed satellite position will manifest as ranging errors that degrade the positioning accuracy. It is important, therefore, to ensure satellite orbit calculations are as accurate as possible. As discussed in this article, Earth rotation plays a key role in this regard, but surprisingly few references on orbit calculation actually mention its effect explicitly or how to compensate for it. Don’t fret, however: the correction is certainly applied, or positioning accuracy would be much worse than is currently attained. Reference Frames Earth rotation is important because of the choice of reference system in which orbital calculations are performed. In particular, GNSS orbits — either from the broadcast orbital models or precise post-mission estimation — are parameterized in an Earth-Centered Earth-Fixed (ECEF) coordinate frame such as the WGS84 reference frame used for GPS. A common definition of an ECEF frame is one whose z-axis is the rotational axis of the Earth (pointing north), whose x-axis is in the equatorial plane and includes the meridian passing through Greenwich, and whose y-axis completes the frame (typically in a right-handed sense). By definition, such a frame rotates with the Earth and is thus time-varying in inertial space with a period of 24 hours. In the context of satellite position computations, this means that satellite locations can be computed at any given time, in an ECEF coordinate frame that is valid at that same time. An easy way to visualize this point is to consider an ideal geostationary satellite whose position relative to the Earth does not change over time — orbital parameters or orbital files would always yield the same coordinates for the satellite. Effect of Earth Rotation So where does Earth rotation enter the picture? Well, precisely from the fact that the time at which a satellite transmits a signal and the time a receiver receives that signal differ. Between the time of transmission (tt) and the time of reception (tr) — roughly 70 milliseconds (give or take a few milliseconds) for medium-Earth orbiting (MEO) satellites — the Earth has rotated by ωe·(tr − tt), where ωe is the rotation rate of the Earth. To illustrate the effect of this, we return to our idealized geostationary satellite. We further consider a user located directly below the satellite. Figure 1 shows this situation looking down on the north pole. To simplify later discussions, we consider this figure to apply at the time of signal transmission. Since the orbital radius of a geostationary satellite is known (approximately 42,164 kilometers) and the radius of the Earth is known (approximately 6,371 kilometers), the separation of the user and satellite at any given instant is constant and can be easily computed. Now consider Figure 2, which shows the same figure but also includes the location of the user and satellite at the time of signal reception. Because of Earth rotation, the signal travels the path denoted by the blue line, which is obviously longer than the instantaneous separation of the satellite and user. This is the path in inertial space (ignoring the Earth’s orbit around the sun for simplicity). The problem, however, is that because orbits are parameterized in an ECEF frame, the computed position of the satellite will still be directly above the user. This leads to a situation where the true signal path and the computed signal path differ.
Unless accounted for, this difference will manifest as a ranging error in the receiver’s position engine, which computes the difference of the measured and predicted signal paths (i.e., ranges). The magnitude of the position error depends on the number and distribution of satellites, as well as user latitude. As an example, in Calgary, Canada, ignoring Earth rotation results in a shift in the estimated user position of about 20 meters, primarily in the east/west direction. Before moving on, although we used the example of a geostationary satellite, the exact same effect applies to non-geostationary orbits as well. The main difference is that the satellite positions in Figures 1 and 2 would not necessarily be directly above the user, and the distance between the user and satellites, projected into the equatorial plane (which is shown in Figures 1 and 2), will vary with time as satellites move along their orbits. The good news is that regardless of the orbit, the method of compensation is the same. Simple Solution To remove the discrepancy between the measured and computed signal paths, we need to compute the ECEF position of the satellite at the time of transmission in the ECEF frame at the time of signal reception. Fortunately, this is easily accomplished by realizing that the two coordinate frames are related by a rotation about the z-axis. Mathematically, we can write
p(tr) = R3(ωe·(tr − tt))·p(tt)    (1)
where p is a position vector at the subscripted time (or frame), and R3(ωe·(tr − tt)) is the rotation matrix about the z-axis by the angle through which the Earth rotates during signal propagation. Applying the transformation in (1) yields the position of the yellow satellite in Figure 2, which allows for the proper computation of the (orange) user position. The astute reader might be wondering how the propagation time is computed. This can be found by iterating to a solution: first, assume an initial propagation time between the user and satellite (e.g., 70 milliseconds); then compute the satellite position using this assumed value (for Earth rotation compensation); use the approximate user position to re-compute the range to the satellite; and finally use this range to compute the satellite position. The accuracy of the user position in the iteration is not typically a problem. The reason is that, even with a position error of 10 kilometers, the worst-case propagation time error would be 33.3 μs (i.e., 10 km / 3e8 m/s). Multiplying this by the Earth rotation rate (~7.3e-5 rad/s) yields an angular error of about 2.4 nanoradians. Even over an orbital radius of 26,000 kilometers (assuming a MEO orbit), the orbital error is less than a decimeter. Then, of course, after the first epoch, the position error is typically several orders of magnitude smaller, making the effect of user position error negligible. Summary This article has shown why Earth rotation needs to be accounted for when computing satellite coordinates for GNSS applications. The compensation is a simple but crucial step for obtaining the highest possible positioning accuracies.
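A minimal Python illustration of the correction and the propagation-time iteration described above (made-up coordinates and a simplified geometry, not production GNSS code):

import math

OMEGA_E = 7.2921151467e-5          # Earth rotation rate, rad/s (assumed value)
C       = 299_792_458.0            # speed of light, m/s

def rotate_z(vec, angle):
    """Apply the R3 rotation of equation (1): rotate an ECEF vector about the z-axis."""
    x, y, z = vec
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * y, -s * x + c * y, z)

def satellite_at_reception_frame(sat_ecef_at_tt, rx_ecef, tol=1e-12):
    """Iterate the propagation time and express the transmit-time satellite position
    in the ECEF frame of the reception epoch."""
    tau = 0.070                                        # initial guess: 70 ms
    sat = sat_ecef_at_tt
    for _ in range(10):
        sat = rotate_z(sat_ecef_at_tt, OMEGA_E * tau)  # compensate for Earth rotation
        tau, prev = math.dist(sat, rx_ecef) / C, tau   # re-compute the propagation time
        if abs(tau - prev) < tol:
            break
    return sat, tau

# Made-up example: a satellite directly above a user on the equator.
sat_at_tt = (26_560e3, 0.0, 0.0)
receiver  = ( 6_378e3, 0.0, 0.0)
sat_at_tr, tau = satellite_at_reception_frame(sat_at_tt, receiver)
print(f"propagation time ≈ {tau * 1e3:.2f} ms")
print(f"apparent shift of the satellite ≈ {abs(sat_at_tr[1]):.0f} m along y")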
https://insidegnss.com/how-does-earths-rotation-affect-gnss-orbit-computations/
Global Positioning Systems, widely known as GPSs, have been of great importance since the days of World War II. Although the initial focus was mainly on military targeting, fleet management, and navigation, commercial usage began finding relevance as the advantages of radiolocation were extended to applications such as (but not limited to) tracking down stolen vehicles and guiding civilians to the nearest hospital, gas station, hotel, and so on. A GPS system consists of a network of 24 orbiting satellites, called NAVSTAR (Navigation System with Time and Ranging), placed in six different orbital planes with four satellites in each plane and covering the entire Earth with their signal beams. The orbital period of these satellites is twelve hours. The satellite signals can be received anywhere and at any time in the world. The spacing of the satellites is arranged such that a minimum of five satellites are in view from every point on the globe. The first GPS satellite was launched in February 1978. Each satellite is expected to last approximately 7.5 years, and replacements are constantly being built and launched into orbit. Each satellite is placed at an altitude of about 10,900 nautical miles and weighs about 862 kg. The satellites extend to about 5.2 m (17 ft) in space, including the solar panels. Each satellite transmits on three frequencies. GPS positioning is based on a well-known concept called trilateration (often loosely described as triangulation). Consider the GPS receiver MS to be placed on an imaginary sphere whose radius is equal to the distance between satellite "A" and the receiver on the ground (with satellite "A" as the center of the sphere). Now the GPS receiver MS is also a point on another imaginary sphere with a second satellite "B" at its center. We can say that the GPS receiver is somewhere on the circle formed by the intersection of these two spheres. Then, with a measurement of distance from a third satellite "C", the position of the receiver is narrowed down to just two points on that circle, one of which is implausible and is eliminated from the calculations. As a result, the distance measured from three satellites suffices to determine the position of the GPS receiver on Earth.
https://www.javatpoint.com/global-positioning-systems
Why do things orbit? How come they don’t just fall to the out of the sky? Why does the Moon orbit the Earth? Why do satellites orbit around us and not fall down? | | The answer to these questions is that they are falling; they are continuously falling. They are falling around, and around the Earth. They have sufficient tangential speed that, instead of hitting the ground, the force of gravity constantly pulls them around in ‘circular’ paths (actually, it’s elliptical, but more of that later). | | This subtly of this is one of Sir Isaac Newton’s great thought experiments. To the left is a copy of one of his original drawings. Imagine you are standing at the top of a tall cliff holding a cannon ball. If you drop the ball straight down, it’s going to accelerate straight down. However, if instead of dropping it, you fire it horizontally, it’s going to follow a curved path as it gets pulled down by gravity as it travels forward. If you launch it horizontally even faster it’s going to travel further, and the path it follows is called a parabola (well, strictly speaking it’s actually part of a very, very, eccentric ellipse, but more of that later)*. If you could fire the ball fast enough it would still fall, but the Earth is curving away from it, and it continuously falls. The question is not “How high do you need to be to get into orbit?” but “How fast do you need to be going to get into orbit?” “How fast do you need to be going to get into orbit?” | | And the answer is: Pretty Fast! To achieve a low Earth orbit you need to be travelling around 7,800 m/s. To escape the Earth you need to be travelling faster than approximately 11,000 m/s (around 25,000 mph, or Mach 33). The Earth curves away approximately 8 inches for every mile travelled. If you want to learn a little more about this check out the consequences of living on a sphere. | | Another interesting consequence is that, because you have to be going fast enough to get into orbit, if you start closer to the equator, and launch to the East, you get a leg-up from the rotation of the Earth. Launching close to the equator can give you boost of around 1,000 mph because of the spin of the Earth. If you look at a map of launch locations you can see they are banded closer to the equator, and not at the poles! It's not just the leg-up in velocity that is a benefit. If your aim is to place a payload in Geosynchronous Orbit (which, by definition, are equitorial), then by launching from the equator you do not need to perform a plane change maneuver, saving valuable propellant (or allowing a more massive payload for the same rocket). This is so valuable the rockets are even launched from ship-born platforms floating near the equator. | | Strictly speaking, when you throw a ball into the air, you could technically say that the ball is in orbit. If you were to plot its path you’d get a very, very, eccentric ellipse with one of the foci at the center of the Earth (diagram not to scale). As mentioned earlier, with this foci being millions of meters away, the ellipse is essentially a parabola. When you throw a baseball, you’re launching it into an orbit; it’s just that it’s a pretty bad one and intersects with the ground! In the words of Douglas Adams, author of many fantastic books “Flying is simple. You just throw yourself at the ground and miss” Up until the ball hits the ground, it doesn't know that the ground is there. It's attempting to orbit a point mass at the foci of that ellipse! 
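As a quick check on the speeds quoted above, before returning to the thrown-ball thought experiment: a few lines of Python reproduce them from the standard formulas v = sqrt(GM/r) for a circular orbit and v = sqrt(2GM/r) for escape (constants are assumed values, not taken from the article).

import math

G, M_earth, R_earth = 6.674e-11, 5.972e24, 6.371e6   # SI units (assumed values)

r_leo = R_earth + 300e3                               # a ~300 km low Earth orbit
v_circ   = math.sqrt(G * M_earth / r_leo)
v_escape = math.sqrt(2 * G * M_earth / R_earth)       # escape speed from the surface

print(f"low-orbit speed ≈ {v_circ:.0f} m/s")          # roughly 7,700-7,800 m/s
print(f"escape speed    ≈ {v_escape:.0f} m/s")        # roughly 11,200 m/s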
If we were able to replace the Earth with a point mass, a thrown ball would orbit and come back. Cool! (Next time someone makes fun of you for not throwing a ball far enough you can reply that, technically, it was in orbit for a little while!) “Flying is simple. You just throw yourself at the ground and miss” Where did the notion that orbits are ellipses come from? This idea was first proposed in the early 1600s by a mathematician called Johannes Kepler, and he proposed three laws of planetary motion which are named after him. He formulated his laws based on the analysis of very accurate plantary observations taken by his mentor, astronomer, Tycho Brahe. The Law of Ellipses - The path of the planets about the Sun is elliptical in shape, with the center of the Sun being located at one focus. The Law of Equal Areas - An imaginary line drawn from the center of the Sun to the center of the planet will sweep out equal areas in equal intervals of time. The Law of Harmonies - The ratio of the squares of the periods of any two planets is equal to the ratio of the cubes of their average distances from the Sun. (T2/R3 is constant). | | The equal areas are a consequence of the conservation of energy and angular momentum. As the planet swings close to the sun, it speeds up (trading its gravitational potential energy for kinetic energy). As it travels further away, it slows down. The graph below shows the Law of Harmonics for entities in our Solar System. By plotting T2 on one axis, and R3 on the other, the constant ratio can be seen as a straight line. This ratio is agnostic of the mass of the planet. An object in orbit at the same orbital radius will have the same time period of rotation, irrespective of its mass. The further out the orbit, the slower the planet travels. Let's look at this with a simple interactive animation Below is a sun showing what the orbits would be like for a planet in two different orbits. The red reference planet is fixed in orbit, but using the controls it's possible to adjust the orbit of the green planet. Click the top right button to start/stop the animation. The ratio of the orbital radii can be adjusted using the triangular slider (or the buttons at the bottom). The other buttons just configure the display. NOTE - The two planets are not interacting with each other. In this simulation it's just a two body problem allowing you to compare how a single planet would orbit depending on it's distance from the sun. Did you notice that the only way to get the planets to rotate at the same rate is to have them on the same orbital radius? Hold onto that thought, we'll come back to it later. The further away from the center of the sun the longer the time period of orbit. All orbits inside the red reference orbit have a time period less than it. | | Matter is attractive. Stuff is attracted to other stuff. Anything with mass experiences a force towards other mass. This force is called gravity. I would attempt to tell you what causes gravity, but the simple answer is that we just don’t know yet. We can measure the force, however, and there is a formula for it (shown left). The force of attraction between two lumps of mass is directly proportional to the each of their masses, and inversely proportional to the square of the distance between them. 
The G in the equation is a constant, named after Newton, and it’s value is approximately 6.674×10−11 N⋅m2/kg2 | | That’s a pretty small number and shows you that gravity is a very weak force (it’s by far the weakest of the four know fundamental forces: Gravitational, Electromagnetical, Weak Nuclear, and Strong Nuclear). The inverse square relationship means the pulling force rapidly diminishes with distance. Because everything is pulling on everything else, determining what these interactions will do could gets infinitely complex. However, because the gravity force falls off with the square of the distance, the force of a body a long way away is typically dwarfed by those nearby. Mathematicians and Physicists use this principle to create simplified models. You will hear them talk about them talk about “The two body problem”, and the “Three body problem”. Here what they are doing is ignoring everything but the two or three massively dominant masses in a system (If you are trying to balance a playground see-saw, you are not worried if a fly lands on one side or another). Let's take a look at the Earth-Moon two body system. The Moon revolves around the Earth about once every month. It's orbit is elliptical, but the average radius is about 239,000 miles (385,000 km). The Earth is approx 81 times more massive than the Moon. The force of gravity it imparts on other objects, relative to the Moon is similarly stronger. If we inserted a test mass (a third body), between these two entities it will experience forces from both of them. (Test mass, in this case, means something being so insignificant compared to the other two bodies that it's effect on them is inconsequential). A satellite or spacecraft orbiting the Earth make less difference to orbit of the Moon than a fly landing on the deck of a massive cruise ship does to the trim of the boat. Above, not to scale, is a satellite in geocentric orbit (orbiting around the Earth) somewhere between the Earth and the Moon. It is being pulled towards the Earth by the gravity force from the Earth. It is being pulled towards the Moon by the mass of the Moon. Because the Earth is so much larger, it's pull is larger, but because of inverse square, if the satellite moves further away from the Earth, this falls rapidly. Where does the pull from the Earth equal the pull from the Moon? Where these things are equal, we have something called a Gravity Neutral Point. Above is a plot of the forces that would act on the satellite based on it's position, normalized on the x-axis is the distance between the Earth and the Moon. The y-scale is logarithmic. Where the two curves intersect is the gravity neutral point. As you can see, because the Earth dominates, this is approximately 90% of the way to the Moon. That's pretty close to the Moon; approximately 24,000 miles (from the center of the Moon). This point has special significance because, in theory, an object closer to the Moon than this distance will be pulled towards the Moon, and an object further away from the Moon than this point will be pulled back to Earth. It's the crest of the hill. If you were travelling between the Moon and the Earth and passed this point, even with no propulsion, you'd be captured by the Moons gravity and be slowly pulled in. Or would you? This point is erroneously called a Lagrange Point (more specifically L1, or Langrange point 1) by bad, bad physics text books and blogs. This is incorrect. 
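The "90% of the way to the Moon" figure is easy to verify: setting the two pulls equal, G·Me/d² = G·Mm/(R − d)², gives d = R / (1 + sqrt(Mm/Me)). A short Python check, using the approximate mass ratio of 81 quoted above and an assumed Earth-Moon distance:

import math

R_km    = 384_400            # average Earth-Moon distance (assumed)
M_ratio = 81                 # Earth mass / Moon mass (the "approx 81 times" figure above)

d_km = R_km / (1 + math.sqrt(1 / M_ratio))   # distance from Earth where the pulls balance
print(f"neutral point: {d_km:,.0f} km from Earth ({d_km / R_km:.1%} of the way to the Moon)")
print(f"that is about {(R_km - d_km) * 0.62137:,.0f} miles from the centre of the Moon")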
Some books even go on to say that if you placed a penny at the neutral point it would stay there and not be pulled in either direction; this is also garbage! (for a couple of reasons). The gravity neutral point is NOT a Lagrange Point. Let's help stop the internet propagating this mistruth. Let's see what Lagrange Points really are … | | The problem with the situation described above is that it makes the unrealistic assumption that the objects are not moving. However, we know the Moon is orbiting around the Earth. Remember the animation earlier in the article? We determined that the only way a satellite can orbit with the same period as another is if they are the same distance away from the center. A penny left at this 'gravity neutral point' is closer to the Earth than the Moon; consequently it will be orbiting inside (and faster than) the Moon, and zip away. Now, there is a point between the Earth and the Moon where a penny (or satellite) can orbit at the same rate as the Moon (and thus stay lock-stepped with it), and it is this that we call a Lagrange Point. | | There is a point, in between the Earth and the Moon, where, even though it's at a smaller radius (and so wants to orbit faster), an object placed there is pulled by the Moon's gravity with just the correct force to counteract the difference between the angular accelerations, reducing the resultant force on the satellite and making it orbit with the same time period as the Moon. This point, between the two large bodies, is called L1, or Lagrange Point 1 (sometimes called a libration point). We'll see how to calculate this distance later, but the L1 point is significantly closer to the Earth than the gravity neutral point (the L1 point is approx 84% of the way to the Moon, cf. the 90% calculated for the neutral point). Passing the L1 point marks the place where the satellite stops orbiting the Earth and starts orbiting the Moon. But L1 is not the only point where a satellite can orbit at the same period as the Moon. There's more … On the other side of the Moon is an orbit that would normally have a longer time period than the Moon's, but there the combined gravity of the Moon and the Earth pulls the satellite in more strongly and causes it to orbit with the same time period as the Moon. This point is called L2. We're not finished yet, there's more … There's also a point, on the other side of the Earth, pretty much directly opposite the Moon, where the force from the Earth plus the very weak force from the Moon combine to put a satellite there in an orbit with the same time period as the Moon. This point is called L3. (The radius of this orbit is ever so slightly larger than that of the Moon, because the force acting on it is the combined pull from both the Earth and the Moon.) The location of L1 is the solution to the following equation, balancing gravitation and the centripetal force:
G·Me/(R − r)² − G·Mm/r² = G·(Me + Mm)·(R − r)/R³
(the right-hand side being the centripetal acceleration needed to go around once per lunar orbit). Here r is the distance from the L1 point to the Moon, R is the distance between the Moon and the Earth, and Me and Mm are the masses of the Earth and Moon, respectively. Rearranging this for r results in a quintic equation that needs solving, but if we apply the knowledge that Me >> Mm, then this greatly simplifies to this approximation:
r ≈ R·(Mm / (3·Me))^(1/3)
Similarly, for L2, the solution can be calculated from balancing the gravitational and centripetal force on the other side of the Moon:
G·Me/(R + r)² + G·Mm/r² = G·(Me + Mm)·(R + r)/R³
Applying the same simplification as above results in the same answer. With this approximation, the L2 point is the same distance from the rear of the Moon as the L1 point is in front.
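Putting numbers into the approximation above, a small Python sketch (standard Earth-Moon values assumed, with the same mass ratio of roughly 81 used earlier) confirms the "84% of the way to the Moon" statement and gives the matching L2 distance:

R_km    = 384_400                       # Earth-Moon distance (assumed)
m_ratio = 1 / 81                        # Mm / Me (assumed)

r_km = R_km * (m_ratio / 3) ** (1 / 3)  # distance of L1 (and, approximately, L2) from the Moon
print(f"L1 is ≈ {r_km:,.0f} km from the Moon, i.e. {(R_km - r_km) / R_km:.0%} of the way there")
print(f"L2 is ≈ {r_km:,.0f} km beyond the Moon")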
Finally, the location of L3 comes from a similar balance (though here, r is the distance that L3 is from the Earth, not the Moon); again, applying the same sort of simplification allows an approximation to be determined. L1, L2, and L3 all lie in a straight line through the center of both the Earth and Moon. The concept of these three points was discovered by Leonhard Euler. However, a couple of years later the Italian mathematician Giuseppe Ludovico De la Grange Tournier realised there were actually five points, so they are named after him as Lagrange Points. So where are the other two points? L4 and L5 are symmetrically placed either side of the center line, at positions 60° from it, at the vertices of equilateral triangles (with the other vertices being the center of the Moon and the barycenter of the Earth-Moon system). The barycenter is the common center of mass of the Earth-Moon pair, and it is around this point that the two revolve, like a bolas. However, since the Earth is considerably more massive than the Moon, the barycenter is still located inside the Earth (it's about 1,700 km below the surface). If the Moon and Earth were nearer the same mass, the barycenter about which they'd spin would be outside both of them, and they would spin around it like a giant dumbbell. As it happens, in the Earth-Moon system, it's more like the wobble of an Olympic hammer thrower as they spin up. The L4 and L5 points lie on an orbit just outside the orbit of the Moon, and the vector mathematics balances out such that the pull from the Moon along that line is equal to the component from the Earth. This vector triangle can be seen in this diagram from Wikipedia. L4 is traditionally defined as the point leading the orbit, and L5 lagging. In the above examples, I used the Earth-Moon system to demonstrate Lagrange points, but I could have just as easily used the Sun-Earth system. In this case, the Sun is the center, and the Earth revolves around this. There are corresponding L1-5 points for the Sun-Earth system. Because the Sun is so massive, the L1 point for the Sun-Earth system is just 1.5 million km from the Earth. When dealing with these things, it's helpful to be able to ignore other bodies that have negligible impact on the system. An astronomical body's Hill Sphere is the region inside of which that body's gravitational pull dominates the motion of objects within it. The boundary of a Hill Sphere is a zero velocity surface. Anything inside the Moon's Hill Sphere will tend to become a satellite of the Moon instead of the Earth. The Earth itself has a Hill Sphere, and outside of this sphere a passing object will be dominated more by the Sun and, if it does not have sufficient energy to pass through, will become a satellite of the Sun instead of the Earth. As you can guess from our discussion, the L1 and L2 points lie on the edge of an entity's Hill Sphere. If a is the average orbital radius, then the Hill Sphere size between a large body of mass M and a smaller body with mass m looks the same as the formula for the L1 distance (see the sketch below). An object passing into the Hill Sphere of the Moon, if it does not have sufficient energy to come out again, will become a satellite of the Moon. Because we know the L1 point for the Sun-Earth system is 1.5 million km, we know that a satellite cannot orbit the Earth further out than that (if it did, it would get more attracted to the Sun and head off in that direction).
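Written out, the Hill Sphere radius referenced above takes the same cube-root form as the L1/L2 approximation (again a sketch assuming the larger mass dominates):

$$r_H \approx a\,\sqrt[3]{\frac{m}{3M}}$$

As a quick check, for the Sun-Earth pair (m/M ≈ 3×10⁻⁶, a ≈ 149.6 million km) this gives roughly 1.5 million km, consistent with the Sun-Earth L1 distance quoted above.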
This means that it's not possible for the Earth to have a satellite orbiting with a time period of greater than approximately seven months (the corresponding time period for an orbit of that radius). (Also, as the Moon is very clearly inside the Sun-Earth Hill Sphere, it has no interest in flying off and orbiting the Sun instead of us, and that's a good thing). Every object has a Hill Sphere. A spaceman in his suit has a Hill Sphere, an asteroid does, a space station does. Like Matryoshka dolls, it's possible for a space station to have little baby satellites in orbit around it! It does, however, depend on the other dominant masses in the region. For example, when the Space Shuttle was in low earth orbit, it had a Hill Sphere radius of about 120 cm (its mass was about 100 tonnes); since this radius is smaller than the shuttle itself, its sphere is inside its structure, so it's not going to pick up orbiting debris and have its own satellites! Now if we had a shrink ray, and could shrink the shuttle down to a point mass, that would be different! (Or pilot the shuttle into inter-stellar space!) In low Earth orbit, anything with a density less than that of solid lead is going to have a Hill Sphere inside of itself (and thus passing objects will be more interested in the Earth than it). As we move further away from the Earth, the influence of Earth's gravity reduces. A geostationary satellite only needs to be more than 6% of the density of water to support satellites of its own. If someone were crazy enough to launch a sphere of gold into low earth orbit it would be possible to orbit grains of rice around it. Because L1 points lie on the zero velocity boundaries of Hill Spheres, they are the celestial crossroads between different orbits. A spaceship sitting at L1 can nudge itself in one direction and drop inside the Hill Sphere of one body, or nudge itself in the other direction and head into the sphere of the other. For example, if you want to launch a satellite and put it into orbit around the Sun, you can launch from the Earth and move to the Sun-Earth L1 transfer point, then give it a gentle kick in the right direction and it will change from a geocentric orbit (around the Earth) into a heliocentric orbit (around the Sun). A fascinating consequence of this is that something coming the other way can also pass through these gateways. As we'll see later, the L1 points are unstable (with respect to the energy curve), so can be chaotic. Small differences in velocities (caused by other bodies in the region) can cause things to do-si-do and flip between the spheres on either side of the zero velocity boundary. No discussion about this is complete without mentioning J002E3, a piece of space junk identified by amateur astronomer Bill Yeung in 2002. Initially thought to be an asteroid, it has since been identified as the S-IVB third stage of the Apollo 12 Saturn V rocket, launched in 1969. After jettison, it's believed to have gone into a high geocentric orbit for a couple of years, then passed close to the Sun-Earth L1 point and jumped ship into a heliocentric orbit, spending a few decades on a sojourn around the Sun before returning in 2002. It hung around for a few Earth orbits, then left back through the L1 point to orbit the Sun again. NASA predicts it will visit us again in 2040. Below is a fascinating animation/simulation they made of its last visit. I don't know about you, but I smiled the entire day when I first heard about this.
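To put numbers on those examples, here is a tiny Python sketch of the same Hill-radius formula. The 100-tonne mass and roughly 400 km altitude for the Shuttle are ballpark assumptions, so treat the outputs as order-of-magnitude checks rather than precise values.

```python
# Hill-sphere radius: r_H ~ a * (m / (3*M))**(1/3)

M_EARTH = 5.97e24   # kg (approximate)
R_EARTH = 6.371e6   # m  (approximate)

def hill_radius_m(a_m, m_kg, M_kg=M_EARTH):
    """Approximate Hill-sphere radius (m) of a small body of mass m_kg
    orbiting a much larger mass M_kg at orbital radius a_m."""
    return a_m * (m_kg / (3.0 * M_kg)) ** (1.0 / 3.0)

shuttle_orbit = R_EARTH + 400e3   # ~400 km altitude (assumed)
shuttle_mass  = 1.0e5             # ~100 tonnes (assumed)

print(f"Shuttle in LEO:            ~{hill_radius_m(shuttle_orbit, shuttle_mass):.1f} m")
print(f"1-tonne geostationary sat: ~{hill_radius_m(4.216e7, 1000.0):.1f} m")
```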
If you watch the animation you can see how the Moon perturbed the Apollo stage and sent it back towards the L1 point and off back into an orbit around the Sun. See you again in a few million miles, J002E3. The five Lagrange points rotate with the system as it revolves. To the left is an animation also showing the gravity potentials around these locations. Lagrange points L1, L2, and L3 are unstable. Objects placed there will drift, and the more they drift, the stronger the forces will be to move them further away. Satellites placed in these locations need assistance to remain on station. L4 and L5 are stable locations (with the caveat that the ratio of masses of the other two bodies is such that the center mass needs to be at least 24.96x that of the smaller body), and objects placed there will tend to remain there. These regions of stability are slightly 'kidney' shaped. Because of this stability, space dust, junk and asteroids tend to collect at the L4 and L5 points of orbits (they are also great locations to base colonies if you are writing science fiction books). Astronomers have even given these locations names. Asteroids resting in L4 locations are called "Greeks", and those in L5 are called "Trojans" (from camps in the Iliad). To the right is a plot of the potentials. Here's a plot showing some of the asteroids that Jupiter has managed to capture at its Lagrange points. L4 and L5 are like celestial parking spaces. Points L4 and L5 act like kidney-shaped indentations or bowls, and if you carefully roll balls into them, the balls stay inside, riding up and down the sides if perturbed slightly. Points L1, L2, and L3 are like trying to place balls onto hilltops; without constant attention, they will roll off and down the hill in one direction after the slightest nudge. There are lots of great reasons to place satellites at Lagrange Points (even if they require gentle on-station adjustments). A satellite at Earth-Moon L2 would allow for communication with the far side of the Moon. A satellite at the Sun-Earth L1 would allow constant observation of the Sun and the Earth; there are already climate-observing satellites there. The proposed James Webb Space Telescope (the successor to Hubble) will sit at the Sun-Earth L2 to reduce light noise. It's been suggested that Earth-Moon L4 and L5 might make good locations for man-made space colonies, but unfortunately these regions live outside the protection of the Earth's magnetosphere and so get bombarded with cosmic rays. Science fiction authors (and some UFO conspiracy people) like to talk about the concept of a Dark Twin Earth, or a mysterious Planet X, that orbits around the Sun (at what would be the Sun-Earth L3 point). They argue that, because it is always 180° behind us, it is always behind the Sun, and that is the reason we've never seen it. This is bogus for a couple of reasons. The first is that we've actually sent things over to L3, and they've not seen anything. But more fundamentally, L3 is unstable, so it's impossible for a planet there to remain hidden. Even if it started out with the same orbital period and directly behind the Sun, it would not stay there. It would eventually drift out from behind the Sun. It can't hide forever. You can't argue with science!
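As a closing sanity check on the two percentages quoted in this article (about 90% of the way to the Moon for the gravity neutral point, about 84% for L1), here is a short Python sketch that uses nothing but the roughly 81:1 mass ratio; slightly different assumed masses move the answers a little.

```python
# Compare the Earth-Moon gravity neutral point with the L1 approximation,
# using only the ~81:1 Earth/Moon mass ratio quoted in the article.
mass_ratio = 81.3   # Me / Mm (approximate)

# Neutral point: Me/d^2 = Mm/(R-d)^2  ->  d/R = 1 / (1 + sqrt(Mm/Me))
neutral_fraction = 1.0 / (1.0 + (1.0 / mass_ratio) ** 0.5)

# L1 approximation: r ~ R * (Mm/(3*Me))**(1/3), measured back from the Moon
l1_fraction = 1.0 - (1.0 / (3.0 * mass_ratio)) ** (1.0 / 3.0)

print(f"Gravity neutral point: {neutral_fraction:.1%} of the way to the Moon")  # ~90%
print(f"L1 point:              {l1_fraction:.1%} of the way to the Moon")       # ~84%
```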
https://datagenetics.com/blog/august32016/index.html
Although it's slightly flattened at the poles, the Earth is basically a sphere, and on a spherical surface, you can express the distance between two points in terms of both an angle and a linear distance. The conversion is possible because, on a sphere with a radius "r," a line drawn from the center of the sphere to the circumference sweeps an arc length "L" equal to (2πr)A/360 on the circumference when the line moves through "A" number of degrees. Since the radius of the Earth is a known quantity – 6,371 kilometers according to NASA – you can convert directly from L to A and vice versa. How Far Is One Degree? Converting NASA's measurement of the Earth's radius into meters and substituting it in the formula for arc length, we find that each degree the radius line of the Earth sweeps out corresponds to 111,139 meters. If the line sweeps out an angle of 360 degrees, it covers a distance of 40,010,040 meters. This is a little less than the actual equatorial circumference of the planet, which is 40,030,200 meters. The discrepancy is due to the fact that the Earth bulges at the equator. Each point on the Earth is defined by unique longitude and latitude measurements, which are expressed as angles. Latitude is the angle between that point and the equator, while longitude is the angle between that point and a line that runs pole-to-pole through Greenwich, England. If you know the longitudes and latitudes of two points, you can use this information to calculate the distance between them. The calculation is a multistep one, and because it's based on linear geometry – and the Earth is curved – it's approximate. Subtract the smaller latitude from the larger one for places that are both located in the Northern Hemisphere or both in the Southern Hemisphere. Add the latitudes if the places are in different hemispheres. Subtract the smaller longitude from the larger one for places that are both in the Eastern or both in the Western Hemisphere. Add the longitudes if the places are in different hemispheres. Multiply the degrees of separation of longitude and latitude by 111,139 to get the corresponding linear distances in meters. Keep in mind that a degree of longitude spans less distance the farther you are from the equator (it shrinks roughly with the cosine of the latitude), so this figure is only a rough approximation away from the equator. Deziel, Chris. "How to Convert Distances From Degrees to Meters." Sciencing, https://sciencing.com/convert-distances-degrees-meters-7858322.html. 20 April 2018.
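The recipe above translates into a few lines of code. This sketch uses the article's 111,139-meters-per-degree figure and its simplifying assumptions (the sample coordinates are hypothetical, and no cosine-of-latitude correction is applied, so the results are rough):

```python
METERS_PER_DEGREE = 111_139   # per-degree figure used in the article

def separation_meters(angle_a_deg, angle_b_deg):
    """Degrees of separation -> meters, following the article's rule.
    Use signed values (N/E positive, S/W negative); the subtraction then
    automatically becomes an addition across hemispheres."""
    return abs(angle_a_deg - angle_b_deg) * METERS_PER_DEGREE

# Two hypothetical points (latitude, longitude in decimal degrees)
lat1, lon1 = 40.0, -74.0
lat2, lon2 = 34.0, -118.0

print("North-south separation:", separation_meters(lat1, lat2), "m")
print("East-west separation:  ", separation_meters(lon1, lon2), "m")
```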
https://sciencing.com/convert-distances-degrees-meters-7858322.html
The constant of gravity, or gravity constant, has two meanings: the constant in Newton's universal law of gravitation (so is commonly called the gravitational constant; it also occurs in Einstein's general theory of relativity); and the acceleration due to gravity at the Earth's surface. The symbol for the first is G (big G), and the second g (little g). Newton's universal law of gravitation in words is something like "the gravitational force between two objects is proportional to the mass of each and inversely proportional to the square of the distance between them". Or something like F (the gravitational force between two objects) is m1 (the mass of one of the objects) times m2 (the mass of the other object) divided by r² (the square of the distance between them). The "is proportional to" means all you need to make an equation is a constant … which is G. The equation for little g is simpler; from Newton we have F = ma (a force F acting on a mass m produces an acceleration a), so the force F on a mass m at the surface of the Earth, due to the gravitational attraction between the mass m and the Earth, is F = mg. Little g has been known from at least the time of Galileo, and is approximately 9.8 m/s² – meters per second squared – and it varies somewhat, depending on how high you are (altitude) and where on Earth you are (principally latitude). Obviously, big G and little g are closely related; the force on a mass m at the surface of the Earth is both mg and GmM/r², where M is the mass of the Earth and r is its radius (in Newton's law of universal gravitation, the distance is measured between the centers of mass of each object) … so g is just GM/r². The radius of the Earth has been known for a very long time – the ancient Greeks had worked it out (albeit not very accurately!) – but the mass of the Earth was essentially unknown until Newton described gravity … and even afterwards too, because neither G nor M could be estimated independently! And that didn't change until well after Newton's death (in 1727), when Cavendish 'weighed the Earth' using a torsion balance and two pairs of lead spheres, in 1798. Big G is extremely hard to measure accurately (to 1 part in a thousand, say); today's best estimate is 6.674 28 (± 0.000 67) × 10⁻¹¹ m³ kg⁻¹ s⁻². The Constant Pull of Gravity: How Does It Work? is a good NASA webpage for students, on gravity; and the ESA's GOCE mission webpage describes how satellites are being used to measure variations in little g (GOCE stands for Gravity field and steady-state Ocean Circulation Explorer). The Pioneer Anomaly: A Deviation from Einstein's Gravity? is a Universe Today story related to big G, as is Is the Kuiper Belt Slowing the Pioneer Spacecraft?; GOCE Satellite Begins Mapping Earth's Gravity in Lower Orbit Than Expected is one about little g. No surprise that the Astronomy Cast episode Gravity covers both big G and little g!
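Putting rough numbers into that last relationship makes the connection concrete (these are approximate textbook values, so the result is only good to a couple of significant figures):

$$g = \frac{GM}{r^2} \approx \frac{(6.674\times10^{-11})\,(5.97\times10^{24})}{(6.371\times10^{6})^2} \approx 9.8\ \mathrm{m/s^2}$$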
https://www.universetoday.com/tag/gravitational-constant/
The Earth's surface area is 510,000,000 km². Calculate the radius, equator length, and volume of the Earth, assuming the Earth has the shape of a sphere.
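The page doesn't reproduce a worked answer here, so below is one way to compute it — a sketch, not the site's official solution, with values rounded:

```python
import math

A = 510_000_000.0                      # surface area in km^2

r = math.sqrt(A / (4.0 * math.pi))     # from A = 4*pi*r^2
equator = 2.0 * math.pi * r            # circumference of the equator
V = (4.0 / 3.0) * math.pi * r ** 3     # volume of the sphere

print(f"radius  ~ {r:,.0f} km")        # roughly 6,370 km
print(f"equator ~ {equator:,.0f} km")  # roughly 40,000 km
print(f"volume  ~ {V:.3e} km^3")       # roughly 1.08e+12 km^3
```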
https://www.hackmath.net/en/math-problem/21763
Well it seems that Earth has survived yet another close shave with an asteroid. This time around, the object in question was a celestial body known as DA14, a rock measuring 45 meters (150 feet) in diameter and weighing in at 130,000 metric tons in mass. Discovered last year by astronomers working out of the La Sagra Sky Survey at the Astronomical Observatory of Mallorca, this asteroid performed the closest fly-by of Earth ever observed by astronomers. Basically, the asteroids passage took it within Earth’s geosynchronous satellite ring, at a paltry distance of 27,000 kilometers (17,000 miles). That may sound like its still pretty far away, but to give you a sense of scale, consider that Earth’s geosynchronous satellite ring, which the asteroid passed within, is located about 35,800 km above the equator. So basically, this asteroid was closer to you than the satellite that feeds your TV set. Scared yet? Naturally, NASA was quick to let people know that DA14’s trajectory and orbit about the Sun would bring it no closer to the Earth’s surface than 3.2 Earth radii on February 15, 2013. In a statement released in advance of the asteroid’s passage, they claimed: “There is very little chance that asteroid 2012 DA14 will impact a satellite or spacecraft. Because the asteroid is approaching from below Earth, it will pass between the outer constellation of satellites located in geosynchronous orbit (22,245 miles/35,800 kilometers) and the large concentration of satellites orbiting much closer to Earth. (The International Space Station, for example, orbits at the close-in altitude of 240 miles/386 kilometers.). There are almost no satellites orbiting at the distance at which the asteroid will pass.” However, they were sure to warn satellite operators about the passing, providing them with detailed information about the flyby so they could perform whatever corrections they needed to to protect their orbital property. All in all, we should be counting our lucky stars, given the asteroid’s mass and size. Were it to have landed on Earth, it would have been an extinction-level-event the likes of which has not been seen since the age of the dinosaurs. In related news, NASA was quick to dispel notions that this asteroid was in any way related the recent arrival of the meteor above the Urals in Russia. In a statement issued earlier today, they said the following:
https://storiesbywilliams.com/2013/02/15/asteroid-misses-earth-again/
A bobsled at 158 km/hr turns around a curve with a 46.69 m radius of curvature on a banked ice track. What is the banking angle if banking alone (friction neglected) is to hold the sled on its track (see sheet 19, 20)? Indicate with a negative (positive) sign whether the banking angle increases (decreases) with increasing speed. A planet has a radius of 6.491 x 10^6 m and a mass of 6.82 x 10^24 kg. What is the gravitational acceleration close to its surface (see sheet 21, 22, 25)? Indicate with a positive (negative) sign whether your calculated value depends (does not depend) on the chemical composition of the planet. A rocket passes the earth at a large distance, of the order of earth radii. The gravitational acceleration due to the earth at this distance is 0.415 g (g is the gravitational acceleration close to the surface of the earth; use 6.0 x 10^24 kg for the earth mass). How far is the rocket from the center of the earth (see sheet 27, 27')? Indicate with a negative (positive) sign whether the gravitational acceleration varies strongly (hardly at all) with distance changes from the surface of the earth which are of the order of a few kilometers. A satellite circles the earth at a height of 561.5 km above the surface of the earth. How many hours does it take the satellite to circle the earth (see sheet 31) (use 6.0 x 10^24 kg and 6.4 x 10^6 m for the earth mass and radius)? Anyone have any idea on questions 2 and 3? I have been working on them for a while and I think they are still due tomorrow. I can't figure it out, please help? First convert the velocity they give to m/s. The mass cancels out since it appears on both sides of the equation, so it's just v^2/r = g*tanθ. We know all of the variables in here now so just plug in and solve for θ. A planet has a radius of 6.491 x 10^6 m and a mass of 6.82 x 10^24 kg. What is the gravitational acceleration close to its surface (see sheet 21, 22, 25)? Indicate with a positive (negative) sign whether your calculated value depends (does not depend) on the chemical composition of the planet. To solve this problem, we use the gravitational acceleration equation, g = G*M/(d^2), where G is the universal gravitational constant, 6.7 x 10^(-11), M is the mass of the planet (the object that an object gravitates to), and d is the distance from the center of the object, usually the radius. In this problem, we use the same equation, but instead of solving for g, which we have, we solve for d. A satellite circles the earth at a height of 561.5 km above the surface of the earth. How many hours does it take the satellite to circle the earth (see sheet 31) (use 6.0 x 10^24 kg and 6.4 x 10^6 m for the earth mass and radius)? Anyone have a clue as to how to go about this? Thanks so much. Watch your inputs in the calculator. I can only guess that it is somewhere around entering 6.4 x 10^24 (as an example only) that is messing up the calculations. If you are using a TI-83 calculator I suggest using parentheses and breaking down each part of the equation. I also suggest using the symbol "e" by hitting "2nd" and then the comma button ",". Above it, it is labeled "EE". You will see something like 6.4E24. This is the same as saying "x10^24".
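For anyone checking their numbers, here is a small Python sketch of the banked-curve and satellite-period questions using the values given above. The constants g and G are standard approximate values, so your exact answers will depend on the constants your course uses.

```python
import math

g = 9.81        # m/s^2 (approximate)
G = 6.67e-11    # N*m^2/kg^2 (approximate)

# Banked curve with friction neglected: tan(theta) = v^2 / (r*g)
v = 158.0 / 3.6                 # km/h -> m/s
r = 46.69                       # m
theta = math.degrees(math.atan(v ** 2 / (r * g)))
print(f"banking angle ~ {theta:.1f} degrees")

# Satellite period at 561.5 km altitude: T = 2*pi*sqrt(a^3 / (G*M))
M_earth = 6.0e24                # kg, as given in the problem
R_earth = 6.4e6                 # m, as given in the problem
a = R_earth + 561.5e3           # orbital radius
T = 2.0 * math.pi * math.sqrt(a ** 3 / (G * M_earth))
print(f"orbital period ~ {T / 3600.0:.2f} hours")
```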
http://sbphysics121.forumotion.net/t25-ch-5-3-help
On 3/31/2021 this topic was updated from its initial posting on 3/29/2021 to clarify, add flights for Jeff and Willy, plus update the KMZs for Carter and Marty to add time data and attempt to standardize the format somewhat. Data Group Web Animation with flight dates changed to sync in Ayvri https://ayvri.com/scene/g0jg73pnjo/ckmxys04t0001256li6lgoi2u KMZs Attached for reference: Carter Crowe 3/28/2021 flight to Topa Bluffs and Back - FAI Rules / 32.5 mile legs x2 = 65.0 miles over 5.94 Hr ~ 10.9 mph SB Rules / 32.7 mile legs x2 = 65.4 miles over 6.73 Hr ~ 9.7 mph - FAI Rules / 31.4 mile legs x2 = 62.8 miles over 4.77 Hr ~ 13.2 mph SB Rules / 32.5 mile legs x2 = 65 miles over 5.34 Hr ~ 12.2 mph - FAI Rules / 30.8 mile legs x2 = 61.6 miles over 4.94 Hr ~ 12.5 mph SB Rules / 31.7 mile legs x 2 = 63.4 miles over 5.13 Hr ~ 12.1 mph - FAI Rules / 29.5 mile legs x2 = 59 miles over 4.25 Hr ~ 13.9 mph SB Rules / 30.4 mile legs x2 = 60.8 miles over 4.84 Hr ~ 12.6 mph - FAI Rules / 29.4 miles legs x2 = 58.8 miles over 4.29 Hr ~ 13.7 mph SB Rules / no score advantage using SB rules Note that parallax is an issue when measuring in Google Earth. One method of avoiding parallax measurement errors is to flatten the terrain (uncheck the terrain box located at the bottom of the layers tree) and reference placemark altitudes to ground rather than absolute altitude (in various item properties). Discussion I’ve done some research on Out and Back records. I was previously under what I think was an incorrect perception that XContest had a category for Out and Back. XContest does calculate more points in their scoring method for various triangles, but I don’t see a category for Out and Back? The XContest triangle scoring is a bit complicated and appears to offer different bonus multipliers for various degrees of alignment with the FAI triangle requirements. One of the philosophical concepts of the XContest rules is that you don’t need to declare your intent prior to the flight. You can do the flight and their algorithm will calculate a score based on your GPS track file. This simplifies the admin overhead and the documentation simplicity increases participation. The international sanctioning organization for various sport aviation records is the FAI / Fédération Aéronautique Internationale https://www.fai.org Various FAI “Commissions” sanction competitions (HGs and PGs are part of the CIVL Commission) and they also sanction “Records and Badges”. For an “official” record to be accepted by the FAI, you need to complete at least some pre-flight requirements like obtaining a Sporting License. Hang Gliding and Paragliding Record requirements are specified in the FAI Sporting Code Section 7D - Class O Records and Badges 2020 Edition Effective 1st May 2020 Note that Paragliders are listed as Class 3 Hang Gliders Flights can be either “Declared” or “Free”. In the “Declared” category, location points are declared prior to the flight, but in the “Free” category location points may be declared post flight. There is a distinction between “Turnpoints” [section 1.5.8] and “Checkpoints” (1.5.11). where Turnpoints are declared prior to the start and Checkpoints are optional and can be identified post (after) the flight competition. For simplicity, when referring to a Free Checkpoint, I will use the term Turnpoint. Section 1.5.5.8 list the record categories for “Free” flights including “Free Out and Return Distance”. 
Section 1.5.5.8 and 3.2.1 also list other “Free” categories like flights around multiple checkpoints (up to 3 turnpoints in addition to the start and end points) and triangles. You can apply for any of the records you choose, but in the discussion below I will focus on the Free Out and Return Distance Record. Various points (start points, check points etc.) are coordinates surrounded by a cylinder.400 meters in radius, or 800 meters in diameter. There is discussion and illustration in the documentation about measuring to the edge of the cylinder, not the center (5.2.5) For “Free” or undeclared Out and Return, we can declare the “points” after the flight, so for the downrange turnpoint we can simply use the actual turnpoint cordinates and ignore the cylinder because we could adjust the cylinder after the fact to match the same scoring result as if we simply use the turnpoint. For the Start/Finish cylinder, we do need to measure to the edge of the cylinder so there may be some geometric penalty that increases as the distance between the actual start and end points increases. I think we also want to consider the arbitrary FAI cylinder radius of 400 meters and our local objectives. Locally, we are trying to get out and back, so we are striving to define what “back” means. Locally, that often means getting back to Parma or East Beach? We could follow the FAI 400-meter radius requirements, but that complicates the flight. As start/finish cylinder size increase the geometry will also result in an increasing penalty. I would argue that a 400-meter radius doesn’t reflect what we often do in SB, which often entails getting on course up higher and returning lower? We could permit a cylinder of a larger size (up to 4 KM radius ~ 2.5 miles), but as your cylinder gets bigger you will also have a larger penalty because geometrically we are measuring from the edge of the cylinder and not the center. I suggest limiting the max cylinder radius to 4000 Meters because that is half the distance from the ridgeline to East Beach. We need to have some limit on the cylinder size or an open distance flight could be scored as an out and return at half the leg distance x2 legs so the total would equal the straight line open distance. Out and Return flights can be more difficult than open distance (but not always) because open distance can sometimes have a tailwind advantage, however, if a course is obstacle limited (like east wind in the Santa Clara River?), then an out and return might have a local advantage over open distance and be able to score more miles. One might argue that if you get back over Casitas Pass and out to Bates then you did an out and return, but the geometric penalty would subtract the diameter of a 10 mile radius circle from the total score so it wouldn’t be competitive anyway. We could use a max cylinder size that connects the Painted Cave Windmill to East Beach, but that would require about an 8 KM cylinder and would only yield a scoring advantage of less than 400 meters compared to using a smaller 4K meter cylinder limit. 4K is also intuitively comparable to 400 so it seems like a natural choice for Santa Barbara’s scenario that will yield a penalty of up to about 2.5 miles (x2) compared to flights that terminate on point. To permit larger cylinders (larger than 400 meters) up to 4 KM we should add one minor requirement for the cylinders greater than 400 meters. 
The center of the cylinder must bisect a line (the diameter line of the cylinder) that touches both the outbound and inbound ground tracks, otherwise everyone would use the max cylinder size and place it back to minimize the geometric penalty. If we go the trouble of scoring a flight to the edge of a start/finish cylinder, it’s not that much more effort to score it both ways yielding one score based on the FAI rules requiring a 400 meter start cylinder and 2nd local score using a larger cylinder size (up to 4 KM radius) that will maximize the score. The FAI measures accuracy to 1/100 but I think that degree of accuracy is cumbersome, I recommend we round our measurements to a tenth of a mile. An exception to this would be in drawing the cylinders where we strive to be within about a meter or 2 of the actual cylinder size to achieve an accuracy within a few meters. For cylinder size it is easier to work in meters than miles for resolution accuracy. In calculating a score there might be some trial and error in determining the actual cylinder placement to achieve the maximum score. The FAI documentation also addresses subtraction for Altitude Loss, but the calculations yield results that are not relevant to our local Out and Return scenario as per 3.4.3 The FAI also requires that new records must break the old record by 1 KM (3.2.4.1). Since we mostly work in miles locally, I think we should require 1 mile rather than 1 KM? I suggest that a pilot can claim they “tied” the old record, but to claim a new outright record a pilot should exceed the old record by something like 1 mile, or if they exceed the prior distance by less than one mile than perhaps they could claim an outright record if their new flight was faster than the prior record flight by at least 1 mph average speed? The FAI requires an applicant to submit various documentation items to support their record claim. I propose that a pilot claiming a local out and back record post a reasonably descriptive narrative of their flight. To have a record we need to actually create a record we can reference. We all know that Scotty flew from SB to somewhere in Ojai and back, but we don’t have the details archived for comparison. Proposal: I propose that we (the SBSA) adopt the FAI methodology for calculating “Free” Out and Return flight distances without the requirement for a Sporting License and permit a larger size start/finish cylinder up to 4 KM in radius. For clarification, I am proposing that: Anyone can claim a Santa Barbara Free Out and Return Distance Record without prior declaration or organization membership requirements. The Start/Finish cylinder must be located somewhere along the front range between Gaviola and White Ledge Peak. (other sites like Ojai, Pine, or Fillmore could have their own local records). For the downrange turnpoint, use the actual turnpoint coordinates as the sole turnpoint without regard to a cylinder. For the FAI compatible score, we use the FAI “rules” which permit adjusting a 400-meter Start/Finish cylinder to fit the track and then measure from the edge of the cylinder to the turnpoint along a line that is colinear with a course line drawn from the turnpoint to the cylinder center. For the local Santa Barbara “Rules”, we permit a Start/Finish cylinder of up to 4 KM in radius, but the center of the cylinder must bisect a line (the diameter line of the cylinder) that touches both the outbound and inbound ground tracks. Altitude loss allowed is not relevant to our local scenario. 
We round the leg distances to the nearest 1/10th of a mile and add the leg distances to get the final score. A new record should exceed the prior record by at least 1 mile or exceed the prior record by less than a mile and record an average speed that is at least 1 mph faster than the prior record speed. The pilot claiming a local record must post a reasonably descriptive narrative of their flight and their IGC file. Additional Discussion I don’t have any authority to make “rules”. I am not suggesting that we can’t individually make up our own objectives, but the closed course flight scoring is more complicated compared to simpler open course scoring. I think there is merit to Carter rules that indicate you need to land within “x” distance of launch? Also, the FAI has categories for other flights like distance around multiple turnpoints and closed triangles. Some pilots think we should simplify by some criteria like if you back to Parma it counts, but there are countless variations that might be left out so some method of assigning a number (mile) score to flights offers more inclusive objectivity? Personally, I rarely land a Parma and prefer to land near the bus stops along the coast. Google Earth’s drawing and measurement tools are somewhat limited. There are likely other programs out there that might yield faster and more accurate results, but the Desktop version of Google Earth is free and utilized my most pilots. ________________________________ Google Earth Measurement Method For my own reference the methods I use to create the geometry in Google Earth are: When creating geometry like circles I untick the “terrain” checkbox in the layer tree to flatten the terrain, otherwise the circles and ground track lines are distorted. For points, I use altitude references to both ground and absolute altitude depending on the use. Sometimes I need both. For differentiation, I use white color for measurement points referenced to absolute altitude, and green color for measurement points referenced to ground. For the Out and Back downrange turnpoint I make 2 placemarks. One with absolute altitude and one referenced to ground. For lines, I use the measure tool. You could also use the “path tool”, but using the line option in the measure tool also creates a path with the added ability to see the distance as you drag or click on a previously created point to zoom to it. The FAI 400 Meter radius circle is a bit different than the larger size cylinder because the outgoing and incoming tracks don’t need to intersect the circle diameter when adhering to the 400 meter cylinder, so I’ll simply do some trial and error drawing out 400-meter circles to fit the tracks in a way that maximizes the leg lengths. I first put down a placemark to mark the center for reference. Once the circles are created, I’ll draw a course line from the turnpoint to the center of the circle (or circles). Save this path but will delete later. Then place a reference point where line intersects the circle circumference, then create a path from that point to the circle center (color it white) and another line from the circumference point to the turnpoint, then delete the course line to leave the 2 colinear lines. The total out and back flight distance is the leg distance x2. Diameter Cylinders: I’ll create a 2-point path (straight line) with the measure tool such that the two end points touch the ground tracks. Save the path. 
Then calculate half the length and create a circle with the measure tool using one end of the path as the circle center and drag out the circle radius to equal ½ of the diameter length of the previously created straight-line path. You can zoom in for resolution accuracy. You need to save the circle but will delete it later, so accept the default name. The newly created circle will bisect the previously created line segment (path) in half. The intersection point is then used as the center of the cylinder circle of the same radius. I’ll then delete the 1st construction circle but first I put down a placemark at the 2nd circle center so I can reference the center.
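Most of the Google Earth steps above come down to great-circle distance arithmetic, so a script can be used to cross-check the manual measurements. The sketch below uses a simple spherical Earth model, hypothetical coordinates, and the edge-of-cylinder measurement described earlier; it is illustrative only and not a substitute for drawing the geometry out as described.

```python
import math

EARTH_RADIUS_KM = 6371.0   # simple spherical model

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two lat/lon points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def out_and_return_miles(cylinder_center, turnpoint, cylinder_radius_m=400.0):
    """One leg runs from the edge of the start/finish cylinder to the turnpoint,
    measured along the line from the turnpoint to the cylinder center.
    Score = 2 legs, rounded to a tenth of a mile."""
    leg_km = great_circle_km(*cylinder_center, *turnpoint) - cylinder_radius_m / 1000.0
    leg_miles = leg_km * 0.621371
    return round(2.0 * leg_miles, 1)

# Hypothetical cylinder center and downrange turnpoint (lat, lon)
start_center = (34.50, -119.80)
turnpoint    = (34.45, -119.25)

print(out_and_return_miles(start_center, turnpoint), "miles (FAI-style 400 m cylinder)")
print(out_and_return_miles(start_center, turnpoint, cylinder_radius_m=4000.0), "miles (SB 4 km cylinder)")
```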
http://scpa.info/bb/forum/viewtopic.php?f=3&p=11082
Tampa, FL—Driver error is a leading cause of motor vehicle crashes in the U.S. Many drivers make mistakes, fail to use caution, or engage in negligent or reckless behavior, all of which cause car accidents to occur. Some examples of driver negligence that have been linked to many accidents according to the Insurance Information Institute (III) include: - Driving too fast for roadway conditions or beyond the post speed limit. Speeding is a serious concern as it is “involved in approximately one-third of all motor vehicle fatalities.” Speeding reduces stopping distance and makes it difficult for a driver to react when traffic has slowed, or they have to make an abrupt maneuver. - Driving while impaired. Alcohol, medication, and illegal substances are other factors that contribute to motor vehicle accidents. It is estimated that each day in the U.S., approximately 29 people suffer fatal injuries in motor vehicle accidents that involve an impaired driver.1 If an individual was involved in an accident with an impaired driver, a Tampa, FL accident attorney can help them take the necessary steps to get the driver recognized for their reckless behavior. - Failure to maintain the proper lane. Drivers who neglect to maintain their lane risk side-swiping other vehicles or cutting in front of them. Drivers often fail to maintain their lane when they glance down at their phone, fall asleep at the wheel, or engage in other distracting behavior that causes their attention to deter from the road. - Fail to yield the right of way. When drivers approach an intersection with a four-way stop, they need to be sure they have the right-of-way before they proceed forward. The same applies when a driver is wanting to make a left or right turn. Unfortunately, not all drivers take into account their state’s right-of-way laws and perform the maneuver when it isn’t their turn to do so. Sadly, this leads to serious accidents, often involving pedestrians, bicyclists, and other vehicles. - Distractions. Although cell phones are the top contributing factor in distracted driving accidents, there are other types of distracting behaviors drivers display that lead to severe and fatal accidents. Steps to Take After Engaging in an Accident With a Negligent Driver in Tampa, FL When a driver is responsible for causing an accident, they should also be held financially responsible for paying the victim for their injuries and losses. If a motorist is looking to recover compensation from a negligent driver’s insurer or even the driver themselves to help pay for the accident-related expenses they incurred, they can contact The Reyes Firm for help. The Reyes Firm can be reached at (833) 422-3329 and offers free consultations to those who are looking to gain a better understanding of their legal rights and why they might need to retain a Tampa, FL accident lawyer. The Reyes Firm can be reached at: 3302 North Tampa Street Tampa, Florida 33603 Phone: (833) 422-3329 Website: www.thereyesfirm.com Source:
https://accident.usattorneys.com/what-are-some-examples-of-driver-negligence-that-contribute-to-car-accidents/
The University of Alabama Birmingham conducted research over the course of a year about the most dangerous driving risk factors. Among the main risk factors is the most common distracted driving example, texting and driving, and a less talked about distraction, driving with children. The study also cited aggressive driving and drowsy driving as contributors to accidents. A main takeaway of the study is that it is important for drivers to know that although distracted driving is a main cause of accidents, there are various other habits to avoid that put yourself and others on the road at risk. Drowsy Driving Impairs Functioning, Similar to Drunk Driving You may think that not getting enough sleep is no big deal, but lack of sleep severely affects your ability to drive because this impairs cognitive function. Lack of sleep causes you to be less alert, and ultimately, less aware of your surroundings. There is a direct correlation between lack of sleep and accidents as according to the study, those who sleep only four to five hours a night are 5.4 times more likely to be involved in an accident. That’s a significant increase that you don’t want to risk. If you are drowsy, accidents have a higher chance of occurring because the effects of drowsy driving are similar to those of drunk driving. Therefore, if you are driving and find yourself blinking frequently, notice your reaction time is delayed or that you are falling asleep, you must act to keep yourself and others on the road safe. There are a few tips to consider if you notice yourself becoming drowsy while driving. However, the safest bet may be to have another passenger drive or pull over safely to contact someone to pick you up. If You Notice You Are Falling Asleep Behind the Wheel: - Open a Window - Put on Music - Speak with Passengers - Drink Caffeine - Pull Over Safely and Take a Nap However, the most important prevention is ensuring you get enough shut-eye each night to be more refreshed and alert. In addition to a myriad of health and wellness benefits, add being a safer driver on the list of reasons to aim for eight solid hours of sleep a night. It’s important to understand the risks of drowsy driving, especially as daylight savings time will end on November 4, 2018. Therefore, sunrise and sunset will occur about an hour earlier than before. This change can be difficult for people to become used to. This can be especially difficult for drivers to get accustomed to, especially as driving in the late afternoon will be significantly darker. Speeding & Aggressive Driving Can Cause Accidents You may have an image in mind when you think of an aggressive driver: an angry person, constantly honking, screaming at another car and trying to run it off the road. Although this is a correct representation of an aggressive driver, actions such as speeding and running red lights are considered to be aggressive as well. An overwhelming amount of drivers at 80% have reported that they demonstrated aggressive behavior while driving at least once in the past year. Speeding is a common aggressive driving behavior that according to the data from the study, many drivers are guilty of doing. But speeding isn’t just a harmless way to get to your destination faster. With increasing your speed, you have less control of your car, which can lead to accidents. Unfortunately, these speeding-related accidents can be fatal. In fact, Alabama ranks third in the nation for traffic fatalities caused by speeding. 
Driving can be extremely stressful, but that’s why it’s important to own up to your aggressive behaviors and work on breaking the habit to be a more responsible driver. How to Avoid Aggressive Driving: - Leave for Your Destination Early - Take Deep Breaths - Abide by the Speed Limit - Don’t Engage in Aggressive Behavior with Other Drivers Hopefully, drivers will utilize these important tips to not only avoid distracted driving, but drowsy and aggressive driving as well, which will ultimately keep yourself and others safe on the roads. We’re Here to Help If you or a loved one are seriously injured in a car accident that was not your fault, call the experienced Alabama Car Accident Attorneys at Floyd Hunter at 334-452-4000. When you call, you’ll receive a FREE case evaluation. There is never an attorney’s fee due upfront, and we don’t get paid until you do. It’s that simple. We will fight to protect your rights to the fair insurance settlement that you deserve when injured in a motor vehicle accident. Call Floyd Hunter Injury Law for a Free Legal Consultation at 334-452-4000. Call Floyd Hunter Injury Law, because the right lawyers make a real difference.
https://floydhunter.com/dont-downplay-the-risks-of-drowsy-and-aggressive-driving/
Intersection injury accidents are increasing in Mission Viejo and across the United States. According to a recent report by the National Highway Traffic Safety Administration (NHTSA) more than 36% of injury accidents occur in an intersection. The vast majority of accidents in an intersection are caused by an error in judgment by one or more of the drivers involved. “Inadequate surveillance” was noted as the most common source of intersection accidents. The failure to exercise caution and carefully observe traffic flow as a driver approaches an intersection accounts for more than 44% of intersection related crashes. Many of these types of accidents in Mission Viejo involve pedestrians and bicyclists. More than 10% of intersection accidents are caused by a driver who has made a false assumption about the behavior of other drivers . Making assumptions about what other drivers will do is a natural part of the driving experience. We expect those with whom we share the road to obey traffic laws, come to full stop or properly yield the right of way. Intersection injury accidents are increasing in Mission Viejo due to the failure to observe traffic laws and use turn signals or make other efforts to communicate what action you intend to take within the intersection. An additional 7% of these accidents are due to illegal driving behaviors such as speeding, accelerating through a yellow light or running a red light.. Another common cause of intersection injury accidents is distracted driving. The use of cell phones, texting and reading email or social media as well as common occurrences such as eating or applying makeup increase the risk of an accident. Intersection injury accidents are increasing in Mission Viejo and the best thing you can do to avoid them is to be pay close attention as you approach any intersection. If you or someone you love is injured in a motor vehicle accident we invite you to review the recommendations of our clients and contact us or call 949-305-1400 to speak with one of our experienced injury attorneys personally for a free consultation.
https://www.rjmlawfirm.com/intersection-injury-accidents-increasing-mission-viejo/
Peter Catania | July 20, 2021 | Car Accidents When a driver yields the right of way, they allow another driver to merge, turn, or continue in front of the driver’s vehicle. Common examples of yielding the right of way include: - Stopping for oncoming traffic at a yield sign - Allowing a car to turn left in front of you when the driver has the green arrow - Waiting until oncoming traffic is clear before turning left across opposing traffic lanes - Merging safely into traffic on the interstate or in multiple lanes of traffic - Pulling over to allow emergency vehicles to pass - Stopping at pedestrian crosswalks - Waiting for a break in traffic to pull out of a parking lot, side road, or driveway - Obeying traffic signals Failing to yield the right of way can result in a traffic ticket, fine, and points against your Florida driver’s license. In addition, it could increase your automobile insurance premiums, depending on the traffic offense and your driving history. Drivers who cause traffic accidents by failing to yield the right of way may be financially liable for damages caused by the collision. Drivers in Florida are expected to know and understand the laws governing the right of way in various traffic situations. The Florida Uniform Traffic Laws and the Florida Driver’s Handbook explain all instances when drivers must yield the right of way to other drivers. Causes of Failure to Yield the Right of Way Accidents Failure to yield the right of way is a common factor in car accidents. According to the Insurance Information Institute, failing to yield the right of way was the third most common factor in fatal traffic accidents in 2019. There were 158,688 traffic accidents in Florida during 2019 related to failing to yield the right of way. Of these, the FLHSMV reported 428 fatal car accidents and 3,518 incapacitating car wrecks. Human error is the most common cause of right-of-way accidents. Factors that contribute to the cause of right of way crashes include: - Not understanding the laws related to right of way - Distracted driving, including texting while driving - Speeding and reckless driving - Driving under the influence of alcohol or drugs - Fatigue or drowsy driving - Aggressive driving and road rage Additionally, improperly marked intersections and lack of road signs can also contribute to a right of way accident at an intersection. For example, a malfunctioning traffic signal might give two drivers the right of way simultaneously. In addition, a yield sign that fell over could contribute to an accident. Accidents Involving Failure to Yield the Right of Way In most cases, drivers get away with failing to yield the right of way each day. So unless you cause a traffic accident or a police officer witnesses your traffic offense, the only negative consequence you might experience is an angry driver shouting at you when you cut him off. However, if you cause a car accident because you failed to yield the right of way, you could be responsible for the accident victim’s damages. Under Florida’s no-fault insurance laws, a driver could be held liable for damages caused by a car wreck if the victim sustains serious injuries. Serious injuries are defined by statute as: - Death - Scarring or disfigurement that is permanent and significant - Loss of an important bodily function that is permanent and significant - Injuries that are permanent within a reasonable degree of medical probability If a person sustains any of the above injuries, the no-fault laws do not apply. 
The individual may proceed with a claim for damages against the driver who caused the car accident. What Damages Can I Recover for a Right of Way Crash? If another driver caused your right of way accident and you sustained serious injuries, you can pursue a claim against the driver for non-economic and economic damages, including: - The medical expenses associated with diagnosing and treating your injuries - The cost of assistance with personal care and household chores - Your pain and suffering, including mental, emotional, and physical suffering - Permanent impairments, disfigurement, and disability - Past, present, and future loss of income, including reductions in earning potential - Psychological injuries, such as PTSD and depression - Loss of enjoyment of life and reduced quality of life You must be able to prove that the other driver caused the crash, the crash caused your injuries, and you sustained damages. The value of your damages depends on your injuries, economic damages, and other factors. If you are partially at fault for a failure to yield the right of way accident, the value of your personal injury case decreases. Florida’s comparative negligence laws permit the court to reduce your compensation by the percentage of fault assigned to you for the cause of the crash. You are under a deadline for filing a claim related to car accidents. Seek prompt legal advice to avoid running out of time to file a claim. For more information, call us at (813) 222-8545 or reach out to us via email by visiting our contact us page.
https://www.cataniaandcatania.com/blog/what-does-it-mean-to-yield-the-right-of-way/
Sideswipe Car Accidents Can Be Deadly According to data collected by the Insurance Information Institute, 961 fatal accidents attributed to sideswipe collisions during 2017. These car accidents are typically thought of as minor incidents. However, they can be quite dangerous and cause severe injuries and fatalities. They are usually caused by one driver being at fault for failing to maintain his or her travel lane. An accident lawyer can advise accident victims whether they may be able to obtain compensation for their injuries, medical bills, and other losses. What Causes a Sideswipe Accident? Most sideswipe accidents are caused by one driver not maintaining his or her lane. Here are the most common reasons why a sideswipe could occur: - The driver is under the influence of alcohol or is driving while fatigued - Improper passing occurred because the negligent driver sideswiped the other vehicle while trying to overtake it - Failure to move over when coming upon a construction zone, prior car crash or to give emergency vehicles the right of way - The driver engaged in aggressive driving or road rage behaviors, including speeding weaving in and out of traffic, or trying to run a vehicle off the road - Distractive driving, such as texting on a cell phone, interacting with passengers, or any other behaviors that cause a driver to take their eyes off the road - Failure to check blind spot before changing lanes Proving Liability for a Sideswipe Car Crash Drivers are required to maintain their lane of travel unless they are making a lane change or making a turn. In Nevada, there are three elements needed for proving liability in a sideswipe or other type of crash. One, the injured party and their car accident lawyer must prove that the driver at fault owed the other driver a duty of care. This duty of care is to operate their motor vehicle in a safe and legal manner. Two, this duty was breached due to the negligent behavior of the other driver that caused the crash. This could include, driving while under the influence, distracted driving, or failure to maintain their lane. And three, the breach caused the accident that caused the victim to be injured and incur medical expenses. When these three conditions are met, an injured victim could seek compensation through a legal claim against the negligent driver’s insurance policy through either a personal injury lawsuit or settlement.
https://fightforthelittleguy.com/blog/sideswipe-car-accidents/
From failure to yield to speeding through a crosswalk, there are a number of factors that contribute to pedestrian accidents across the country. Tragically, these accidents continue to claim lives in Houston and throughout Texas. When a pedestrian is struck by a vehicle, they may have to deal with a wide variety of problems, such as a debilitating personal injury or costly medical bills. After someone is hit by a drunk or distracted driver and fortunate enough to survive, they should do everything they can to move forward and ensure that the irresponsible driver is held accountable. After all, when drivers cause a serious crash, they cannot be let off the hook. Law enforcement officials say a 59-year-old woman was driving her car in Tyler when she struck a 46-year-old pedestrian. The pedestrian, a man who was attempting to cross the street in a wheelchair, suffered serious injuries as a result of the collision. The driver of the vehicle was not cited and did not show any signs of intoxication. The pedestrian is in the hospital recovering from the accident, which occurred shortly before 7 p.m. close to the intersection of 2nd Street and South Vine. Whether someone is struck by a vehicle in a hit-and-run accident or has a difficult time paying medical expenses after a collision, pedestrians face a number of hardships when their life is turned upside down because of a negligent driver. As a result, people who have experienced these difficulties firsthand should closely evaluate their legal options and may want to consider talking with an attorney.
https://www.grimesfertitta.com/blog/2014/12/texas-pedestrian-hurt-in-accident.shtml
Car Accidents at Intersections
Did you know that approximately 2.5 million auto accidents take place at intersections every year, according to the Federal Highway Administration? Sadly, these accidents are common and often result in serious injuries and property damage. If you have been in a car accident at an intersection, you may want to file a claim. The personal injury attorneys at Chanfrau & Chanfrau can help car accident victims secure the compensation they deserve following an accident. Contact our practice in Palm Coast or Daytona Beach, FL to schedule a time to have one of our attorneys review your case.
Accidents at Intersections in Florida
According to the National Highway Traffic Safety Administration, a total of 1,134 fatalities occurred at an intersection or within an approach to an intersection in Florida in 2017. Sadly, the number of fatalities at intersections has increased during the last five years:
- 2013: 764
- 2014: 803
- 2015: 1,009
- 2016: 1,043
- 2017: 1,134
In Volusia County, 45 people died in intersection crashes in 2017, up from 25 in 2013. In Flagler County, 12 people died in intersection crashes in 2017, up from 5 in 2013.
Types of Collisions at Intersections
There is always a degree of chance involved in driving. You may be going the speed limit and alert to all your surroundings, and still end up in an accident. The following types of accidents commonly occur at intersections:
- Rear-end crashes: Rear-end crashes occur when a driver does not stop in time after the driver in front has stopped.
- Left-hand turn collisions: If a driver turning left does not see or misjudges the speed of an oncoming vehicle, the risk of an accident increases.
- T-bone collisions: A driver running a light may crash into the side of a driver legally passing through the intersection.
Causes of Collisions at Intersections
There are many potential causes for these collisions. Generally, auto accidents at intersections are the result of negligence on the part of a driver. This may involve:
- Running a red light: Sometimes drivers speed up when a traffic light turns yellow in an attempt to beat the light. Doing so can result in a collision.
- Failing to yield: Drivers sometimes mistakenly assume they have the right-of-way when they do not, leading to crashes.
- Aggressive driving: Speeding, ignoring traffic signals, and dangerous passing are just a few aggressive maneuvers that lead to intersection collisions.
- Distracted driving: The leading cause of all auto accidents, distracted driving can come in the form of texting, speaking on the phone, and playing with the radio.
Injuries Resulting from Intersection Car Accidents
Unfortunately, auto accidents at intersections often lead to serious and lasting injuries, including:
- Bone fractures
- Lacerations
- Fractured ribs
- Spinal cord injury
- Traumatic brain injury
- Internal organ injury
- Paralysis
- Death
How a Car Accident Attorney Can Help
Car accidents at intersections cause physical distress, and they take a toll on the victim financially. We must show that the other driver was responsible for your injury in order to obtain the compensation you are due to cover your medical costs, lost wages, and other expenses. Our experienced attorneys analyze all available evidence, including police reports, photos of the accident, and witness statements, to prove the other driver was at fault.
Contact a Car Accident Lawyer
If you were injured in an auto accident at an intersection, contact a personal injury attorney right away.
The car accident lawyers at Chanfrau & Chanfrau can review your case. Contact us online or call (386) 258-7313 to schedule a case evaluation.
https://www.chanfraulaw.com/blog/2019/03/19/car-accidents-at-intersections-197219
Car and truck accidents in Bradenton can cause fatal injuries and are sadly all too common. According to the Florida Department of Highway Safety and Motor Vehicles (FHSMV), around 400,000 motor accidents happen in the State of Florida every year, and truck crashes account for 1 in 10 national highway crashes.
Intersections are a particularly common place for accidents to occur, with some of the most common causes of crashes being speeding, running a red light, or failing to yield to the flow of traffic or the correct right of way. All of these have the potential to cause dangerous collisions.
Part of the injustice of truck accidents is that it is often other drivers, rather than the truck driver, who suffer the most from these collisions. Research has shown that, of the 2,500 people killed in truck accidents in 2018, only 500 of the fatalities were truck drivers. This is because the driver is more likely to be protected from harm or death inside a truck.
If you or a loved one has been involved in a crash at an intersection in Bradenton involving a truck, then you should seek advice from a personal injury lawyer as soon as possible. Here at The Law Place, our team has over 75 years of combined experience dealing with cases just like yours, and hundreds of our clients have successfully won compensation for accidents that weren't their fault. Contact us today to arrange a free consultation with one of our accident lawyers. Our phone lines are monitored 24 hours a day, 7 days a week for the convenience of our clients, so call today at (941) 444-4444.
Common Causes of Truck Intersection Accidents
Intersections are a common location for all types of automobile crashes, and collisions there can be caused by a variety of factors. There could be a hazard on the road, for example, or harsh weather conditions that make driving safely more difficult. Unfortunately, most collisions at intersections could have been avoided if it weren't for the negligence of one or more parties.
The attitude of drivers at intersections can lead to collisions, particularly if drivers believe that the rules of the road somehow do not apply to them. For example, drivers should always slow down to a stop when a traffic light turns yellow, and yet drivers will often accelerate instead to beat the red light. This dangerous behavior is the type of reckless driving that can lead to a T-bone or head-on collision at an intersection.
Some common types of reckless driving behavior that can lead to a collision include:
- Negligent driving – Driving in a way that ignores the safety of other drivers is extremely dangerous and can result in a fatal accident. This can include improper lane changing, road rage, distracted driving, speeding, or cutting off another driver.
- Failure to obey traffic signals and signs – Speeding past a stop sign or running a red traffic light is a serious offense and puts the lives of drivers and passengers at risk.
- Failure to yield the right of way – Extremely dangerous T-bone collisions can occur when a driver pulls out into traffic or performs a sharp left turn that cuts off another vehicle. These crashes often happen at high speed, making them even more catastrophic.
- Driving Under the Influence (DUI) of drugs or alcohol – If a driver is under the influence of a substance, they could have impaired judgment and slower reaction times, which makes it more likely that they will drive recklessly or fail to react in time to avoid a hazard, leading to a collision.
- Fatigue – Truck drivers are pressured to drive long hours without taking many breaks and so are often tired while driving. This can also increase the likelihood of the driver making a mistake, leading to a crash.
- Rushing – Commercial truck companies often pressure their truck drivers to make deliveries on strict time schedules. A driver might feel pressure to speed in order to make sure they reach their destination on time.
All drivers in Florida have a duty of care when driving, i.e., the legal and moral obligation to ensure the safety and wellbeing of others on the road at all times. Under Florida Statute 316.208, driving laws apply to all motor vehicles, including trucks. If you have been in an accident caused by a failure to obey the rules of the road or by neglectful driving, then you deserve justice for the suffering it has caused you.
Truck accidents can also occur because of negligence on the part of the trucking company, for example, a failure to properly maintain the vehicle. Accidents can happen if safety systems such as the tires, brakes, and axles are not regularly checked and maintained. An accident could also be caused by a failure to properly secure the cargo to the truck, as this can affect the handling of the truck in turns, and falling cargo can be a serious hazard to other drivers.
If you or someone you know has been in a truck collision in Bradenton, FL, that was caused by negligence, then you should contact a reputable law firm as soon as possible. Our accident attorneys have dealt with countless accident cases involving trucking companies and have fought on behalf of our clients to seek compensation for injuries or damages caused by trucks. Contact The Law Place today to schedule a free consultation with an accident attorney. They will give you free legal advice to help you decide what next steps to take.
Damage Caused by Truck Accidents
All types of car accidents are dangerous, but accidents involving trucks are particularly serious. Trucks in Florida can be huge in size, such as 18-wheelers and dump trucks, meaning they have the potential to cause a lot of damage in a collision. The cargo on trucks also contributes to their huge weight, which can make a crash more likely. This is because a vehicle with a large weight takes a longer time to brake, making it more difficult for a truck driver to avoid an accident if a driver in front of them brakes suddenly or if there is an unexpected hazard.
Trucks also have larger blind spots than average cars. This means that if a careless driver tries to overtake a truck without being aware of this blind spot, an accident could result, particularly if the truck driver tries to change lanes without being aware of the other driver.
Common Injuries Caused by Truck Accidents in Bradenton
Truck accidents usually cause much more severe injuries than regular car accidents because of the size and weight of trucks.
Some common injuries resulting from accidents involving trucks include:
- Traumatic brain injuries
- Spinal cord injury
- Severe burns
- Crushed or broken bones
- Internal injury and organ damage
- Paraplegia or quadriplegia
If you have been injured in a truck accident at an intersection, then you are no doubt enduring a lot of pain and suffering in your recovery, as well as mounting medical bills that might not be covered by regular medical insurance. Call us today for no-obligation, free legal advice from one of our accident attorneys. They can help you calculate how much you could be owed in compensation for your accident injuries.
What Damages Can a Personal Injury Lawyer Help Me to Claim?
If you have been involved in a truck accident at an intersection in Bradenton, FL, then we understand the amount of suffering you must be experiencing. Car accidents involving trucks can lead to life-changing injuries and extensive damage to you as well as your wallet. If you have been seriously injured in the crash, it is unlikely that your insurance policy will cover all of the damages.
A personal injury lawyer can help you to calculate the damages caused by your truck accident to see if you could benefit from an injury claim. Some damages that you may be able to claim include:
- Medical bills
- Lost wages
- Lost potential future wages
- Property damage
- Pain and suffering
- Wrongful death
If you have been injured in a truck accident at an intersection, we understand that no amount of money can make up for the pain you have experienced. However, being more economically secure will ease the stress of recovery and make life easier for you and your family. Contact our law firm today to see how much you could be owed in compensation. Our team of accident attorneys has dealt with many similar accident cases involving trucking companies, so call today for free legal advice from a personal injury lawyer.
Who Can Be Held Responsible for a Truck Accident?
All drivers in Florida, including truck drivers, have a duty of care to ensure the safety of other drivers at all times, as per Florida Statute 316.208. Unfortunately, reckless driving behavior or negligence on the part of the trucking company can lead to devastating accidents that cause serious injury and even death.
Florida is a no-fault state, meaning that the payout from insurance companies following an auto accident is calculated as a percentage of how much each party can be proved to be to blame. For this reason, it is important to figure out who was at fault in any accident. However, this can be very complicated in a truck accident, as multiple parties could be held responsible for the collision:
- The truck driver – As mentioned above, reckless driving or distracted driving can lead to a crash. Many trucking companies will define their drivers as independent contractors in order to reduce their liability in case of an accident.
- The trucking company – If the truck crash was caused by a failure to properly maintain the vehicle, then the trucking company can be held responsible.
- Cargo loaders – Trucks can carry up to 40 tonnes of weight. If this is not loaded correctly, movement can cause the weight distribution of the truck to shift, which can cause the truck driver to lose control. The loaders of the truck could then be held accountable.
It is also important to be aware of the statute of limitations that applies in personal injury claim cases.
Florida, as with all states, has a limit on the amount of time you have to file a lawsuit following an accident. As stated in Florida Statute 95.11, a person has four years from the date of the accident to file a personal injury lawsuit. You will not be able to make a claim if more than four years have passed, apart from certain situations where injuries prevent a person from making a claim in that time.
Although you have four years to make a claim, it is highly recommended that you start the process as soon as possible following the truck accident. The longer you wait, the more difficult it is for your accident lawyer to gather valuable evidence and build your case. The sooner you begin your claim, the more likely you are to receive a higher payout for your case. So don't hesitate; contact us today for a free consultation with an accident attorney.
What Can a Personal Injury Lawyer From The Law Place Do for Me?
If you were involved in a truck accident at an intersection in Bradenton, it is highly recommended that you seek legal representation from a truck accident lawyer at a reputable law firm. If you call us, you will receive a free case consultation with one of our attorneys. They will go through the details of your case with you, help to calculate an estimate of the damages and injuries you could claim for, and give you advice about the next steps to take to file a lawsuit.
If you decide to work with us after that, your assigned attorney will be by your side for the entire process of the lawsuit with support and legal advice. You can let your attorney handle the investigation of the accident, all communication and negotiation with insurance companies, and the necessary paperwork. Investigating the accident will include collecting valuable evidence from the scene, inspecting the damage on vehicles involved in the accident, speaking to witnesses, checking CCTV footage, and using accident reconstruction technology to analyze the roadway at the scene.
Dealing with insurance companies can be particularly tricky without a personal injury lawyer, as these companies will often do all they can to minimize their payout, including trying to shift the blame onto you or claiming that your injuries were not caused by the accident. Luckily, we have a lot of experience in dealing with insurance companies and will ensure that you are not taken advantage of.
Here at The Law Place, we have over 75 years of combined knowledge and experience in managing cases just like yours, and we will help fight for your rights and ensure that the guilty party is held responsible for the suffering and damage caused by your truck accident. Our phone lines are monitored 24 hours a day, so call now at (941) 444-4444 for free legal advice regarding your case.
https://www.thelawplace.com/areas-we-serve/bradenton-fl/truck-accident-lawyer/accidents-at-intersection/
When driving in Michigan, one must be aware of the most common causes of automobile accidents. According to the Michigan State Police, the leading cause of car accidents in the state is distracted driving. There are many forms of distracted driving, but the most common is texting while driving. This is especially true for younger drivers. Texting while behind the wheel is now the leading cause of death for teenage drivers in the United States. If driving in Michigan, put your phone away and focus on the road. It could save your life. As a road user, you should also be alert to the signs of distracted driving in other motorists. If you see a driver not paying attention to the road, be prepared to take evasive action.
The most common cause of automobile accidents in Michigan is distracted driving. Distracted driving is any activity that takes a driver's attention away from the road. This can include talking or texting on a cellphone, eating, drinking, talking to passengers, grooming, using a navigation system, and more. Distracted driving is dangerous because it increases the risk of a crash. In fact, according to the National Highway Traffic Safety Administration (NHTSA), distracted driving was responsible for 3,166 deaths in 2017 alone.
Below are more causes of automobile accidents in Michigan.
1- Speeding
Speeding is all about driving too fast for the conditions. It's one of the most common causes of accidents because it gives drivers less time to react to hazards.
2- Drunk driving
Drunk driving is a severe problem in Michigan. In 2017, there were 290 drunk driving fatalities in the state. That's nearly one-third of all traffic deaths in Michigan that year.
3- Weather
Bad weather can make driving conditions more difficult and increase the risk of an accident. Common weather-related accidents include those caused by snow, ice, sleet, and rain.
4- Road rage
Road rage is aggressive or violent behavior exhibited by drivers. It's a significant problem on Michigan roads, and it can lead to severe accidents.
5- Mechanical problems
Sometimes, accidents are caused by mechanical issues with the vehicle. This could be something as simple as a flat tire or a more severe case like faulty brakes.
6- Tailgating
Tailgating is when a driver follows another vehicle too closely. This leaves little time to react if the car in front brakes suddenly.
7- Cell phones
Talking or texting on a cell phone takes a driver's attention away from the road. It is essential to use a cell phone only when it is safe.
8- Driving under the influence
Driving under the influence of drugs or alcohol is never a good idea. It puts yourself and others at risk. If you are going to drink, have a designated driver.
9- Fatigued driving
Fatigued driving is when a driver is too tired to be behind the wheel. This can lead to accidents because it decreases reaction time and impairs judgment.
If you are injured in an accident caused by a distracted driver, you may be able to recover compensation for your medical bills, lost wages, and other damages. Car accident lawyers in Grand Rapids, MI, can help you understand your legal options and fight for the compensation you deserve.
https://backstageviral.com/what-is-the-most-common-cause-of-automobile-accidents-in-michigan/
Despite efforts to improve driver safety, including graduated licenses for young adults, driver error plays a leading role in the cause of car accidents.
How Driver Error Contributes to Car Accidents
Most car accident litigation revolves around claims of negligence. That is, the defendant driver is not accused of intentionally causing the accident, but is accused of making errors or omissions in driving conduct that created an undue danger of an accidental collision. Most errors that result in accidents are relatively minor mistakes that unfortunately produce serious consequences. Sometimes a driver's misconduct will be sufficiently irresponsible that the driver is cited for careless driving or, in cases that show indifference to the safety of others, even reckless driving, offenses that some states classify as criminal misdemeanors. Common driver errors include:
- Disregard of Traffic Control Devices: A driver's failure to yield at a traffic control device -- most often a stop sign, yield sign, or traffic light -- can pose a significant risk to other vehicles that have the right-of-way. As such accidents often involve cars striking each other in a perpendicular manner, such as in a T-bone collision where one driver crashes into the doors of the other driver's car, the risk of injury is particularly great.
- Failure to Yield: Beyond traffic lights, yield signs, and stop signs, accidents arising from a failure to yield often occur at unmarked intersections, entry ramps, traffic circles, and points where lanes of traffic merge. Not everybody respects the rules of right of way or pays attention to merging traffic. It is important to exercise additional care at such points of potential danger.
- Dangerous Passing: Attempting to pass another vehicle on the shoulder, in a no-passing zone, where the line of vision of oncoming cars is obstructed, where oncoming traffic is dangerously close, or similar passing conduct may result in a car accident, in some cases involving a head-on collision.
- Dangerous Turning: Attempting to turn from the wrong lane, or suddenly slowing or stopping in a traffic lane upon realizing that you are about to pass a desired intersection or exit ramp, can be extremely dangerous to other drivers.
- Driving on the Wrong Side of the Road: Although it is sometimes tempting to do so in order to pass stopped traffic, and sometimes people accidentally turn the wrong way onto a one-way road, it goes without saying that driving in an oncoming traffic lane can be extremely dangerous.
- Reading While Driving: Attempting to read instructions, road maps, or other materials, whether on paper or on an electronic device, while driving a car.
- Use of Electronic Devices: Attempting to change a tape or CD, dialing a cellular phone, texting, using an inappropriate entertainment device (such as trying to watch a DVD while driving), and other similar acts can distract a driver from the road and increase the chance of an accident.
- Maintenance Issues: Poor maintenance of a vehicle, particularly of its brakes, can contribute to accidents. Drivers are responsible for making sure that their cars are safe to drive.
- Vehicle Lights: The failure to properly use turn signals, to maintain headlights, brake lights, and signal lights, or to illuminate headlights when conditions require it can all contribute to accidents.
Although vehicle safety has improved, any car accident carries the risk of personal injury.
Even with front and side-wall airbags, the fact remains that most car safety devices are designed to prevent injury from a front-end collision, and accidents may occur from any direction. Seat belts do not do much to prevent sideways movement. Dashboard airbags are also not of much use, and may not even deploy from a side impact.
Inattention and Distraction
Accidents may result from the driver's activity within a vehicle, such as their inattention to the road or distraction by their activity within the vehicle, or by the actions or behaviors of other occupants of the vehicle. Distraction may also result from the presence of a pet inside the vehicle, or from the presence of an insect, notably including a stinging insect such as a wasp or a bee.
Distraction or inattention may also result from factors outside of the driver's control. For example, a driver may be suddenly blinded by the sun or by oncoming headlights, and be temporarily unable to see an approaching object or hazard in the roadway. The presence of distracting elements outside of the vehicle, such as a brightly lit, animated billboard, may draw the driver's attention away from other objects or hazards and increase the risk of an accident.
Drivers are more likely to be unduly distracted from the task of operating their vehicles if they are tired, if they have consumed alcohol or sedating medication, or if they are elderly or medically infirm. A sleep-deprived person who operates a motor vehicle can suffer a level of impairment similar to that which results from alcohol or drug intoxication.
Road Rage Accidents
There is no question that road rage contributes to car accidents. As road rage incidents often occur on highways and freeways, the accidents that result can be extremely serious, and can involve additional vehicles. Accidents may result from an angry driver's intentionally dangerous driving acts, including:
- Brake-checking: braking suddenly in front of another car;
- Tailgating: pulling up right on another driver's bumper;
- Intentional contact: trying to tap the other driver's bumper while the vehicles are in motion.
Even when an angry driver is not attempting to retaliate against another driver, the driver is more likely to make mistakes while operating a vehicle. If you are being victimized by an angry driver, try to find a way to remove yourself from the situation:
- Slow down or take an exit.
- If the angry driver pursues you, try to pull into the parking lot of a police station or a busy business.
If you are considering engaging in an act of road rage, you should also remove yourself from the situation, if necessary pulling over and stopping your vehicle until you have calmed down. When an accident results from road rage behavior, even if a case may be made that the driver intentionally caused the accident, due to insurance coverage issues the accident will often be characterized in litigation as having resulted from negligent conduct.
Rear-End Collisions
In most jurisdictions, a driver who rear-ends another car is presumed to have caused the accident. In most cases, that presumption is correct: the driver at the rear has followed another vehicle too closely, or hasn't paid proper attention to what is going on in the roadway ahead, and doesn't notice that another car has stopped or slowed in front of him until it is too late to avoid a collision.
In higher-speed rear-end collisions involving a line of cars, such as cars stopped at a traffic light, the initial collision may propel the stopped cars into each other, such that three, four, or even more cars become involved.
The most common defense to a charge of negligence arising from a rear-end collision is the sudden emergency - that is, a claim that the car that was hit stopped suddenly and unexpectedly, or that something sudden and unexpected (such as a truck losing its load on the roadway) caused that car to come to a sudden stop, rendering the collision unavoidable. Where one or more cars were able to stop in reaction to a claimed sudden emergency, it is more difficult to make this claim, as claiming a sudden emergency will inspire the question, "If other drivers could safely stop, why couldn't you?"
Persons injured in car accidents due to the mistakes of others should consult a personal injury lawyer to ensure that their rights are protected and that they receive appropriate compensation for their injuries.
https://www.expertlaw.com/library/car-accidents/driver-error.html