http://physics.aps.org/synopsis-for/10.1103/PhysRevC.86.011304
# Synopsis: Neutron Knockout
Knocking out neutrons or protons from an isotope of oxygen provides a successful test of an equation that relates nuclear masses.
Unstable nuclear states that decay by emitting protons or neutrons provide important tests for models that are based on the properties of nuclei close to the line of stability. They also permit tests of isospin symmetry—a symmetry of the dominant component of the strong nuclear force—by probing whether certain nuclear properties are unchanged when a neutron is replaced by a proton and vice versa.
One of the rarest decay modes of unstable nuclei is the essentially simultaneous emission of two protons. In a paper in Physical Review C, Marieke Jager at Washington University, Missouri, and collaborators report results from an experiment in which they observed two different $2$-proton decay modes. In a beam of an isotope of oxygen, ${}^{13}\text{O}$, either a neutron or a proton was knocked out, leaving ${}^{12}\text{O}$ and ${}^{12}\text{N}$, respectively. The $2$-proton decay of these to ${}^{10}\text{C}$ and ${}^{10}\text{B}$ gave valuable information on the mass and lifetime (inversely related to a property called the width of the state) of states in ${}^{12}\text{O}$ and ${}^{12}\text{N}$. The mass and width for the ${}^{12}\text{O}$ ground state turn out to be considerably smaller than what was known from previous measurements, and they resolve a discrepancy with the theoretical prediction for the width. The properties of the isobaric analog state in ${}^{12}\text{N}$—analogous to one in ${}^{12}\text{O}$, except that a neutron replaces a proton—are measured here for the first time.
These results permit a successful test of a $3$-parameter equation, called the isobaric multiplet mass equation, that relates the masses of nuclei in the same state (here ${}^{12}\text{Be}$, ${}^{12}\text{B}$, ${}^{12}\text{C}$, ${}^{12}\text{N}$, and ${}^{12}\text{O}$) with the same total number of nucleons, simply in terms of the respective numbers of protons and neutrons. – Rick Casten and John Millener
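For context (a standard result, not spelled out in the synopsis), the isobaric multiplet mass equation expresses the mass of each member of the multiplet as a quadratic in the isospin projection $T_z = (N - Z)/2$, with the coefficients $a$, $b$, and $c$ as its three parameters:

$$M(T_z) = a + b\,T_z + c\,T_z^2$$

Fitting $a$, $b$, and $c$ to three of the five $A=12$ masses then predicts the remaining two.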
https://socratic.org/questions/how-do-you-evaluate-sqrt-b-2-4ac-for-a-9-b-10-c-1
# How do you evaluate sqrt(b^2-4ac) for a=-9, b=10, c=-1?
Jul 19, 2017
See a solution process below:
#### Explanation:
To evaluate this expression substitute:
$\textcolor{red}{-9}$ for $\textcolor{red}{a}$
$\textcolor{blue}{10}$ for $\textcolor{blue}{b}$
$\textcolor{green}{-1}$ for $\textcolor{green}{c}$
$\sqrt{{\textcolor{blue}{b}}^{2} - 4\textcolor{red}{a}\textcolor{green}{c}}$ becomes:
$\sqrt{{\textcolor{blue}{10}}^{2} - \left(4 \cdot \textcolor{red}{-9} \cdot \textcolor{green}{-1}\right)} \implies$
$\sqrt{100 - 36} \implies$
$\sqrt{64} \implies$
$8$
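For a quick numeric check (a sketch added here, not part of the original answer):

```powershell
# Evaluate sqrt(b^2 - 4ac) for a = -9, b = 10, c = -1
$a = -9; $b = 10; $c = -1
[math]::Sqrt($b*$b - 4*$a*$c)   # 100 - 36 = 64, so this prints 8
```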
https://systemcentertipps.wordpress.com/tag/powershell/page/2/
With the last patch cycle from Microsoft we had a bigger problem with patch MS15-018, which failed on a lot of servers. The patch also prevented all following patches from being installed. It was downloaded correctly, but could only be installed manually by running the downloaded exe file from the SCCM cache folder.
To automate this a bit, my colleague Mihaly Kolozsi created a PowerShell script based on my design ideas. A big thanks to him for lending me his brain and time ;-).
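The script itself is not shown here; a rough sketch of the manual workaround it automates could look like this (the KB number and the silent switches are my assumptions, not taken from the original script):

```powershell
# Rough sketch only: locate the downloaded update in the local SCCM cache and run it silently.
# KB3032359 (MS15-018) and the '/quiet /norestart' switches are assumptions.
$cache = Join-Path $env:WinDir 'ccmcache'
$patch = Get-ChildItem -Path $cache -Recurse -Filter '*KB3032359*.exe' | Select-Object -First 1
if ($patch -is [Object])
{
    Start-Process -FilePath $patch.FullName -ArgumentList '/quiet /norestart' -Wait
}
```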
## SCOM 2012: Check greyed out agents
Greyed out agents can be a nightmare for a System Center Operations Manager admin. An agent gets greyed out if the Health Service is not communicating correctly with the management servers. Normally an alert named "Health Service Heartbeat Failure" should be created to indicate this status. But sometimes I see the situation where the alert was created but then auto-resolved by the system after a while (because of an agent recovery etc.). The problem is that the agent may still stay in an unhealthy state while no new alert gets created. I see that from time to time when the agent is stuck or has resource problems. This situation needs to be resolved quickly, because during that time no monitoring on the agent side takes place.
So how can this be resolved?
I implemented this solution: the management servers already know which agents are greyed out, so I have created a rule which runs on the "All Management Servers Resource Pool" every 5 min (you can select another interval if you like). It checks which agents are greyed out but are not in maintenance mode, and then checks for each agent whether there is an open "Health Service Heartbeat Failure" alert. If no alert is found, it adds the server to a list, which is then reported in a single alert named "Sample – greyed out agents".
The main logic of the rule is based on a PowerShell script. Here is the part with the logic – I have skipped everything around it (log function, SCOM module, etc.).
```powershell
$TotalCount = 0
$list = ""
$agentclass = Get-SCOMClass -Name "Microsoft.SystemCenter.Agent"
# Find greyed out agents which are not in maintenance mode
$agentobjects = Get-SCOMMonitoringObject -Class:$agentclass | Where-Object {($_.IsAvailable -eq $false) -and ($_.InMaintenanceMode -eq $false)}
if ($agentobjects -is [Object])
{
    $msg = "`r`nFound greyed out agents which are not in maintenance mode."
    Log -msg $msg -debug $debug -debugLog $debugLog
    # Go through agent list
    foreach ($agent in $agentobjects)
    {
        $msg = "`r`n" + $agent.DisplayName
        Log -msg $msg -debug $debug -debugLog $debugLog
        # Go on if watcher state for the agent is unhealthy
        if ((Get-SCOMClass -Name "Microsoft.SystemCenter.HealthServiceWatcher" | Get-SCOMClassInstance | Where-Object {$_.DisplayName -eq $agent.DisplayName}).HealthState -ne 'Success')
        {
            # Find open Health Service Heartbeat Failure alert for the agent
            $alert = Get-SCOMAlert -Name 'Health Service Heartbeat Failure' | Where-Object {($_.ResolutionState -ne 255) -and ($_.MonitoringObjectDisplayName -eq $agent.DisplayName)}
            # No alert for greyed out agent found
            if ($alert -isnot [Object])
            {
                $list += "`r`n" + $agent.DisplayName
                $msg = "`r`nThe agent " + $agent.DisplayName + " has no open Health Service Heartbeat Failure alerts. Add to list."
                Log -msg $msg -debug $debug -debugLog $debugLog
                $TotalCount++
            }
        }
    }
}
```
You can find the rule in a small management pack called Sample.BaseMonitoring, which you can download here.
It is designed for SCOM 2012 SP1. Please test it in your development environment before you add it to production!
There are some clean-up tasks a System Center Configuration Manager 2012 administrator can perform on a regular basis. One of them is to check which advertisements are expired.
There is no PowerShell cmdlet for SCCM 2012 SP1 that could give me this information directly, so I have created a script that can be used in two ways:
1. document which deployments are expired in CSV format (everything except the currently assigned schedule is documented), or
2. delete the expired deployments.
So you can call the script with different parameters.
Command line: Getexpiredadvertisements.ps1 -log [String] -sitecode [String] -siteserver [String] -document [Bool] -delete [Bool]
Example: Getexpiredadvertisements.ps1 -log "c:\it\expiredads.csv" -sitecode "ABC:" -siteserver "SCCM01" -document $True -delete $False
Possible Parameters:
• Log: Defines name and path of the written CSV file
• Sitecode: SCCM Site Code, Example ABC:
• Siteserver: SCCM Site Server Name
• Document: Defines whether expired advertisements get documented to the CSV file; possible values: $True/$False
• Delete: Defines whether expired advertisements get deleted in SCCM; possible values: $True/$False
Requirements:
• Run this script in PowerShell x86
• The script is tested with PowerShell 2.0 and SCCM 2012 SP1
• The Configuration Manager PowerShell module must be installed on the machine where you run the script
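For illustration only, here is a rough sketch of how such a check could be done with WMI (the class and property names, CSV path, and the delete branch are assumptions on my part, not the actual script):

```powershell
# Sketch only: find advertisements whose expiration time has passed.
# SMS_Advertisement and its ExpirationTime/ExpirationTimeEnabled properties are assumptions.
$sitecode   = "ABC"      # hypothetical site code (without the colon)
$siteserver = "SCCM01"   # hypothetical site server

$expired = Get-WmiObject -ComputerName $siteserver -Namespace "root\SMS\site_$sitecode" -Class SMS_Advertisement |
    Where-Object { $_.ExpirationTimeEnabled -and
        ([System.Management.ManagementDateTimeConverter]::ToDateTime($_.ExpirationTime) -lt (Get-Date)) }

# -document $True: write the findings to CSV
$expired | Select-Object AdvertisementID, AdvertisementName, CollectionID |
    Export-Csv -Path "c:\it\expiredads.csv" -NoTypeInformation

# -delete $True: remove the expired deployments instead
# $expired | ForEach-Object { $_.Delete() }
```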
## SCOM 2012: Daily Alert Owner Email Report
System Center Operations Manager has a nice way of handling alerts. You can assign an owner for an alert, who should be responsible to resolve it.
But how does the owner get notified that he/she is now assigned to this alert? Ok, you can set up a subscription, but that would send out one email per alert, for example.
I would like to have one email per owner, listing all alerts assigned to him/her, on a daily basis.
I have created a PowerShell script for that, which can be scheduled through a task on a management server.
Background
If you want to set the owner, then you can click on the change button.
SCOM connects to AD and adds the UPN (UserPrincipalName) of the given account to this field, e.g. [email protected]
The script reads all open alerts with a critical or warning severity and an owner which contains "@" – some management packs already fill the owner field with additional information. So the "@" indicates that the owner field was set manually.
To be able to send out a report through email to the assigned owner, there must be an email address entered in the Mail field of the Active Directory user ID.
The email to the owner lists all assigned alerts in an HTML table.
Thanks to Jason Rydstrand – I took parts of his SCOM2012Health-Check script to build my report email.
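The script itself is not reproduced here; a minimal sketch of the described logic could look like this (not the author's script – the management server, domain, and SMTP names are placeholders, and the AD lookup via Get-ADUser is my assumption):

```powershell
# Minimal sketch: one digest mail per alert owner
Import-Module OperationsManager
Import-Module ActiveDirectory
New-SCOMManagementGroupConnection "SCOMMS01"   # hypothetical management server

# Open warning/critical alerts whose owner was set manually (owner contains "@")
$alerts = Get-SCOMAlert -Criteria "ResolutionState!=255 AND Severity!=0 AND Owner LIKE '%@%'"
foreach ($group in ($alerts | Group-Object -Property Owner))
{
    # Resolve the owner's mail address in AD via the UPN stored in the Owner field
    $mail = (Get-ADUser -Filter "UserPrincipalName -eq '$($group.Name)'" -Properties Mail).Mail
    if ($mail)
    {
        $body = ($group.Group | Select-Object Name, MonitoringObjectDisplayName, TimeRaised | ConvertTo-Html) -join "`r`n"
        Send-MailMessage -To $mail -From "scom@domain.com" -SmtpServer "smtp.domain.com" `
            -Subject "Daily SCOM alert owner report" -Body $body -BodyAsHtml
    }
}
```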
## Orchestrator 2012: Check SCCM maintenance window and set SCOM maintenance mode
Everyone who uses System Center Configuration Manager 2012 and System Center Operations Manager 2012 knows the problem of setting the server into maintenance mode when patching or software deployment needs to take place.
With System Center Orchestrator 2012 you get the integration packs for both systems and the option to create a workflow for this task. My intention was to use the maintenance windows which are defined on the collections. During this timeframe, software updates and deployments, incl. reboots, can be performed on the servers, so it makes sense to set the servers into maintenance mode in SCOM as well. I only focused on general, non-recurring maintenance windows, not OSD ones.
Here is the summary of the workflow I have created:
The workflow runs every 2 minutes. It reads a text file on the runbook server with all collection IDs it should check, then checks whether a collection has a maintenance window defined that will start within the next 10-15 minutes. If yes, it gets the collection members in SCCM, gets the FQDN for each server, and starts the maintenance mode in SCOM. If successful it writes a log file; otherwise it tries again to set the maintenance mode with the NetBIOS name.
Most of the parts are standard activities, so I only describe the "Get Maintenance Window" activity, which runs a PowerShell script on the runbook server. This activity needs to run with a user that has SCCM permissions, otherwise it will return no result. It only has output data if a maintenance window will occur within the next 10-15 minutes. So the link to the Get Collection Members activity should have the following include entry: Pure Output from Get Maintenance Window matches pattern .+
Here is the command line for the Get Maintenance Window activity:
```powershell
cmd.exe /c | c:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe -c "function WMI-DateStringToDate($time) { [System.Management.ManagementDateTimeConverter]::ToDateTime($time);};$collsettings = ([WMIClass] '\\SCCM Server FQDN\root\SMS\site_SCCMSiteCode:SMS_CollectionSettings').CreateInstance();if ($collsettings -is [Object]){$collsettings.CollectionID = 'Link to Line Text of previous activity';$collsettings.get();$windows=$collsettings.ServiceWindows;if ($windows -is [Object]){$now=Get-Date;Foreach ($window in $windows){$Time=WMI-DateStringToDate($window.StartTime);if (($window.IsEnabled -eq $True) -and ($window.ServiceWindowType -eq '1') -and ($window.RecurrenceType -eq '1')){if (($now.AddMinutes(15).compareto($Time) -eq '1') -and ($now.AddMinutes(10).compareto($Time) -eq '-1')){$Duration=$window.Duration+15;write-host ($Time.ToString(),$Duration) -separator ';'}}}}};"
```
Attention! The command line should not have line breaks! Otherwise it will not work within this activity.
For better readability I post the script here also with line breaks and comments:
```powershell
param($SMSSiteCode, $SMSManagementServer, $COLLECTION_ID)

# convert WMI date to DateTime format
function WMI-DateStringToDate($time)
{
    [System.Management.ManagementDateTimeConverter]::ToDateTime($time)
}

# get collection settings (incl. maintenance windows)
$collsettings = ([WMIClass] "\\$SMSManagementServer\root\SMS\site_$($SMSSiteCode):SMS_CollectionSettings").CreateInstance()
if ($collsettings -is [Object])
{
    $collsettings.CollectionID = $COLLECTION_ID
    $collsettings.get()
    $windows = $collsettings.ServiceWindows
    if ($windows -is [Object])
    {
        $now = Get-Date
        foreach ($window in $windows)
        {
            $Time = WMI-DateStringToDate($window.StartTime)
            # only check general maintenance and non-recurring windows
            if (($window.IsEnabled -eq $True) -and ($window.ServiceWindowType -eq '1') -and ($window.RecurrenceType -eq '1'))
            {
                # check if start time is within the next 10-15 min
                if (($now.AddMinutes(15).compareto($Time) -eq '1') -and ($now.AddMinutes(10).compareto($Time) -eq '-1'))
                {
                    # add 15 min to duration as buffer
                    $Duration = $window.Duration + 15
                    write-host ($Time.ToString(), $Duration) -separator ';'
                }
            }
        }
    }
}
```
Another thing to mention: please add an exclude to the link between "Get Collection Member" and "Get FQDN" for your management servers: Member Name from Get Collection Member equals SCOMMGServerName. Then they will not be set into maintenance mode if they are members of the checked collections.
Update
I found some problems with the daylight saving settings on the runbook server. We use UTC maintenance windows in SCCM. With daylight saving, the local time of the runbook server gets adjusted but the maintenance window stays in standard UTC. The script compares the local time with the maintenance window, so the old version set the maintenance mode at the wrong time while daylight saving is in effect. Therefore I had to adjust the script. Here is the new version; the new parts (marked in italics in the original post) compute $universal and $diff and shift $Time by $diff in the comparisons.
```powershell
cmd.exe /c | c:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe -c "function WMI-DateStringToDate($time) { [System.Management.ManagementDateTimeConverter]::ToDateTime($time);};$collsettings = ([WMIClass] '\\SCCM Server FQDN\root\SMS\site_SCCMSiteCode:SMS_CollectionSettings').CreateInstance();if ($collsettings -is [Object]){$collsettings.CollectionID = 'Link to Line Text of previous activity';$collsettings.get();$windows=$collsettings.ServiceWindows;if ($windows -is [Object]){$now=Get-Date;$universal=$now.ToUniversalTime().AddHours(([System.TimeZoneInfo]::Local).baseutcoffset.hours);$diff=($now.subtract($universal)).Hours;Foreach ($window in $windows){$Time=WMI-DateStringToDate($window.StartTime);if (($window.IsEnabled -eq $True) -and ($window.ServiceWindowType -eq '1') -and ($window.RecurrenceType -eq '1')){if (($now.AddMinutes(15).compareto($Time.AddHours($diff)) -eq '1') -and ($now.AddMinutes(10).compareto($Time.AddHours($diff)) -eq '-1')){$Duration=$window.Duration+15;write-host ($Time.ToString(),$Duration) -separator ';'}}}}}"
```
Here is the link to the runbook.
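In the runbook the maintenance mode itself is set by the SCOM integration pack activity; purely as an illustration, an equivalent PowerShell step could look like this (server names and duration are placeholders):

```powershell
# Illustration only: the runbook uses the SCOM integration pack activity for this step.
Import-Module OperationsManager
New-SCOMManagementGroupConnection "SCOMMS01"      # hypothetical management server

$fqdn     = "server01.domain.com"                 # FQDN resolved for the collection member
$duration = 75                                    # window duration incl. the 15 min buffer

$instance = Get-SCOMClassInstance -Name $fqdn
Start-SCOMMaintenanceMode -Instance $instance -EndTime (Get-Date).AddMinutes($duration) `
    -Reason PlannedOther -Comment "SCCM maintenance window"
```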
## Orchestrator 2012: Reset SCOM 2012 monitor for closed alert
Everyone who works with System Center Operations Manager 2012 knows the problem of closed alerts where the monitor has not been reset first. The monitor will stay in the unhealthy state and no new alerts will be created anymore until the monitor gets reset.
You can create a scheduled task with a script on a management server or use Orchestrator for it. I found this blog which describes how to use the “Monitor alert” activity and then run a script afterwards. http://blog.scomfaq.ch/2012/05/05/reset-monitor-using-scom-2012-and-orchestrator-a-must-have-runbook/
I like the “Monitor alert” activity but I would like to reduce the number of scripts which connect to the management group.
So I have created another runbook.
The first activity “Check every 5 min” triggers the runbook every 5 min. I think that is a good timeframe to check for closed alerts.
The next activity "Reset Monitor" runs on the runbook server. It uses PowerShell and imports the SCOM 2012 module, so this must be installed on the runbook servers and the execution policy should be set to RemoteSigned.
Here are the details of the activity:
```powershell
$Alertname = @()
$State = @()
$Displayname = @()

# Import Operations Manager module and create connection
Import-Module OperationsManager
New-SCOMManagementGroupConnection %ManagementServerName%

$alerts = Get-SCOMAlert -Criteria "Severity!=0 AND IsMonitorAlert=1 AND ResolutionState=255" | Where-Object {$_.LastModified -ge ((Get-Date).AddMinutes(-5)).ToUniversalTime()}
if ($alerts -is [Object])
{
    foreach ($alert in $alerts)
    {
        $monitoringobject = Get-SCOMClassInstance -Id $alert.MonitoringObjectId
        # Reset monitor
        if (($monitoringobject.HealthState -eq 'Error') -or ($monitoringobject.HealthState -eq 'Warning'))
        {
            $monitoringobject.ResetMonitoringState()
            $State += $monitoringobject.HealthState
            $Displayname += $monitoringobject.DisplayName
            $Alertname += $alert.Name
        }
    }
}
```
The script gets all closed alerts from monitors with severity 'Warning' or 'Critical' within the last 5 min and only resets the monitor if it is still in 'Error' or 'Warning' HealthState. You could also use this script for a scheduled task on a management server. The published data is Alertname, State, and Displayname; you could also publish other data, but that was what I needed for troubleshooting.
## SCOM: Disable Active Directory integration on an agent with PowerShell
Some companies use Active Directory integration for agent assignment in System Center Operations Manager. In some circumstances you may have to remove the Active Directory integration from an agent (for example, AD integration should not be used on domain controllers or Exchange servers), perhaps because you have used software distribution without different options for special server classes, or because you want to get rid of AD integration altogether. I have written a PowerShell script that can be run on an agent to remove the AD integration and re-add the management group(s) as manual entries.
```powershell
$object = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
if ($object -is [Object])
{
    # only change the agent if Active Directory integration is enabled
    if ($object.GetActiveDirectoryIntegrationEnabled())
    {
        # get all AD-integrated management groups
        $MGs = $object.GetManagementGroups() | Where-Object {$_.IsManagementGroupFromActiveDirectory -eq $True}
        $object.DisableActiveDirectoryIntegration()
        $object.ReloadConfiguration()
        foreach ($MG in $MGs)
        {
            $object.AddManagementGroup($MG.managementGroupName, $MG.ManagementServer, $MG.managementServerPort)
        }
    }
}
& net stop healthservice
& net start healthservice
```
https://www.cableizer.com/documentation/d_b3/
# Distance c of multi-layer backfill
Distance c, used to calculate the resistance of multi-layer backfill.

Symbol: $d_{b3}$

Unit: m

Formula: $d_{b3} = \sqrt{{s_{b3}}^2 + {w_{b4}}^2}$

Related: $s_{b3}$, $w_{b4}$

Used in: $R_{q11}$, $R_{q21}$
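A quick numeric illustration of the formula (the values here are hypothetical, not from the documentation):

```powershell
# d_b3 = sqrt(s_b3^2 + w_b4^2), with assumed example values in metres
$s_b3 = 0.3; $w_b4 = 0.4
$d_b3 = [math]::Sqrt($s_b3*$s_b3 + $w_b4*$w_b4)   # 0.5 m
```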
https://cjds.uwaterloo.ca/index.php/cjds/article/download/317/537
# Somewhere to live, something to do, someone to love: Examining levels and sources of social capital among people with disabilities
## Janet Williams, Communityworks, Inc., janetw[at]mindsmatterllc[dot]com
### Abstract
Social capital has emerged as an important ingredient in the maintenance of physical and mental wellbeing. Although this construct has been studied within the disability community, a comparative analysis of social capital among individuals with disabilities and the general population is missing from the literature. Also sparse is an investigation into the sources from which people with disabilities draw their social capital. Building on the seminal work of political scientist Robert Putnam, a modified version of the Harvard Kennedy School’s Social Capital Community Benchmark Survey was administered to 218 adults with high support needs living with a broad range of disabilities and currently receiving support from one of six disability organizations across the United States and Canada. Chi-squared analyses were conducted to test for differences between observed frequencies and expected frequencies obtained from general population surveys on six key measures of social capital. Results indicate that, in most areas, social capital levels among individuals with disabilities were lower when compared with those of general population respondents. In cases where social capital levels were higher than or comparable to general population respondents, an incongruity between subjective evaluations and quantitative reports, and/or support received from non-normative sources such as parents and professionals are likely explanations. Our findings support continued efforts by rehabilitation professionals to facilitate community integration for people with disabilities through the promotion of friendships and other social relationships in a variety of contexts.
### Keywords
• Disability
• Social capital
• Social support
• Community integration
• Wellbeing
During a 1990 presentation at the Pacific Coast Brain Injury Conference in Vancouver, B.C., physiatrist Sheldon Berrol (as cited in Flaherty, 2008) eloquently noted that what is most important to us all is to have somewhere to live, something to do, and someone to love. For individuals living with disability, however, these elements are frequently missing. Although major reform in education, housing, transportation, vocational training, transition services and rehabilitation has greatly improved quality of life for people with disabilities, many continue to be isolated and excluded from social activities, from employment opportunities, and from their communities (Flaherty, 2008).
Over the past several decades, social capital has gained currency as a key determinant of physical and mental wellbeing. As a theory, social capital offers a conceptual measure of the instrumental value of social relationships and has significance for social work and rehabilitation practice within the disability community. Professionals in these fields are in a vital position to assist in the social capital building of the people they support; however, research has yet to undertake a comparative analysis in key areas of social capital between individuals with disabilities and the general population. There is also little known about the sources from which people with disabilities draw their social capital. This knowledge may be useful in better understanding the social capital needs of this population and may assist in the development of targeted programs for this purpose. The present study sought to address this paucity in the research literature by shedding light on these important but understudied lines of inquiry.
### Background
That people with disabilities have historically faced pervasive inaccessibility that perpetuated their exclusion from community participation is well known (Barnes & Sheldon, 2010; Galer, 2014; Isaac, Raja, & Ravanan, 2010). The response to the “problem” of disability traditionally included funneling individuals into residential or long-term care facilities and leaving them in the care of professionals and policy makers (Galer, 2014). Growing public opposition that rejected segregated models of care eventually gave way to deinstitutionalization and the development of community-based services, accommodations, and supports that centered instead on the capabilities and rights of people with disabilities. This shift challenged conventional medical model discourses of disability as inherent impairments in favour of social models that view disability largely as the result of unexamined barriers and discriminatory attitudes (Benedet & Grant, 2014). Rehabilitation professionals figured prominently in this shift, teaching individuals functional and adaptive skills and expanding opportunities for social inclusion and greater participation in community life.
Nonetheless, the goal of helping individuals better integrate into their communities through the delivery of more holistic services has not always been successful. National surveys report that people with disabilities continue to face high levels of institutionalization, unemployment and social disconnection, lower levels of life satisfaction, and that a disproportionate number socialize less often with friends, relatives and neighbours, and partake less frequently in community activities (Condeluci, Ledbetter, Ortman, Fromknecht, & DeFries, 2008; National Organization on Disability, 2010). Over the last several decades, the dynamic interplay between individual, community, and institutional level factors has gained prevalence in understanding and explaining variations in human functioning. In 2001, the World Health Organization developed the International Classification of Functioning, Disability and Health (ICF), a framework for the conceptualization, classification and measurement of health and health-related domains within disability. According to the ICF model, the health of people with disabilities is a multidimensional experience. Aside from the exclusively biological processes that determine disability, psychosocial and environmental influences are also implicated in how individuals experience their disability. From the ICF perspective, the ultimate goal for people with disabilities is not merely enhancing their functionality but also their full inclusion and participation within the community. The expansion of social networks, therefore, may be regarded as a central tenet of the rehabilitation agenda.
### The Role of Social Capital
The importance of social connections cannot be overstated. Humans are by nature social creatures; our relationships are a fundamentally important aspect of our wellbeing (Irvine, 2007; Kroll, 2011; O’Brien, 2012). Social capital theory advocates that the support systems provided by our networks of family, friends, neighbours, coworkers, acquaintances and other associations have value and offer benefits in concrete and measurable ways. Political scientist Robert Putnam has written extensively on the concept of social capital and defines it as “our relations to one another” (Putnam, 1995, p. 666); the “connections among individuals - social networks and the norms of reciprocity and trustworthiness that arise from them” (Putnam, 2000, p. 19). Major social institutions such as religious organizations (Stone, Cross Purvis, & Young, 2003), neighbours (Gambrill & Paquin, 1992; Walker & Hiller, 2007; Ziersch, Baum, MacDougall, & Putland, 2005) and employment (Potts, 2005; Walker et al., 2007; Williams, 2008) have historically been important repositories of support and emotional wellbeing as they present opportunities for socializing and are often antecedents to the development of relationships. Work settings, for example, are the second most important social unit in many people’s lives following family (Stewart, 1985) and workplace relations have traditionally been among the most common forms of civic connectedness (Putnam, 2000). Social connections also impact career mobility (Kulkarni, 2012) and it is estimated that between 40-70% of those seeking employment find their jobs through others in their social network (Parris & Granger, 2008).
In 2000, Putnam and his colleagues at the Harvard Kennedy School of Government undertook the first systematic attempt to measure social capital within communities across America. Since then, Putnam’s research has revealed that levels of civic engagement - how much residents trust others, socialize with others, and join groups - predict quality of life indices far better than either income or educational level. The role of community participation and its attendant benefits are central to Putnam’s theorizing and the importance of social networks for both objective and subjective wellbeing has been well documented by previous research. Indeed, the literature is remarkably consistent in the conclusion that the more connections we form, the more opportunities we have, and the better able we are to deal with the stressors of life (Condeluci et al., 2008; d’Hombres, Rocco, Suhrcke, & McKee, 2011; Folland, 2007; Hawe & Shiell, 2000; Putnam, 2000; Rocco & Suhrcke, 2012; Scheffler, Brown, & Rice, 2007). However, Putnam’s research has also revealed that levels of social capital have deteriorated significantly over the past several decades, leaving contemporary citizens disconnected from family, friends, neighbours, and even democratic structures. Some researchers have suggested that this social isolation - the lack of social capital - actually causes disease (Cohen, 2004). Social isolation has long been recognized as a major risk factor for depression (Karp, 1994; Victor, Scambler, Bowling, & Bond, 2005) and the higher rates of depression, suicide and general malaise among today’s youth have been attributed to more time spent alone and fewer, weaker, and more fluid relationships (Putnam, 2000). Thus, high social capital, as considered in the present study, is viewed as both a natural motivator of human behaviour and a mechanism of health and wellbeing.
### The Challenge of Measuring Social Capital
Social capital has been covered thoroughly by a number of scholars and continues to be the subject of much controversy (Bourdieu, 1986; Brisson, Roll, & East, 2009; Coleman, 1988; Lin, 2001; Putnam, 2000; Schuller, 2007; Webber, Reidy, Ansari, Stevens, & Morris, 2015; Ziersch & Baum, 2004). Distinctions have been drawn between bonding, bridging, and linking varieties of social capital (Hawkins & Maurer, 2012; Mithen, Aitken, Ziersch, & Kavanagh, 2015), and disagreement exists over whether social capital is a collective social resource that benefits communities, or whether its benefits are associated with people, their personal networks, and support (Hawkins & Maurer, 2012; Poortinga, 2006). Others still argue that such studies are not studying social capital directly, but rather indirectly, through its causes and consequences (Appel et al., 2014). Further, because social capital is a multidimensional concept, it is often conflated with related constructs such as social support, social network, sense of community, community connectedness, quality of life, and civic participation, to name a few (Fowler, Wareham-Fowler, & Barnes, 2013; Hawkins & Maurer, 2012; Kawachi & Berkman, 2001; Lovell, Gray, & Boucher, 2015; Merriam & Kee, 2014; Saltkjel & Malmberg-Heimonen, 2014). Key terms such as "trust" and "community" are often difficult to quantify, making attempts to operationalize social capital and transition away from a purely theoretical understanding a challenge (Svendsen & Sorensen, 2006). This has resulted in considerable conceptual confusion and a general lack of consensus over what exactly constitutes social capital and if and how it can be measured. Inconsistencies in the literature regarding the positive relationship between social capital and health have been reported (e.g., Mithen et al., 2015; Murayama, Fujiwara, & Kawachi, 2012; Ziersch & Baum, 2004) and may be a reflection of differences in conceptual understandings and measurements.
It is often the case, however, that the most interesting and important questions are also the most difficult to study. This is particularly the case when dealing with the social, emotional, and interpersonal contexts within which human social activity takes place. Given the constraints of space, we are unable to engage in the larger debate surrounding social capital and its relationship to related constructs, though we do acknowledge their importance in the literature. The present study draws on the common themes identified in the work of Putnam (1995, 2000) who analyzed levels of social trust, participation in voluntary associations, and other forms of political and civic engagement as indicators of social capital. More broadly, we conceive of and employ social capital as an umbrella term for the many advantages an individual can acquire through membership in a social community and as an important factor in health and wellbeing.
### Social capital and disability
Global research has shown that people with disabilities experience poorer physical and psychological health than people without disabilities (World Health Organization and World Bank Group, 2011). Disability theorists who have studied social capital and related constructs such as social support and community engagement have demonstrated the deprivations faced by people with disabilities in these areas (Bramston, Bruggerman, & Pretty, 2002; Kampert & Goreczny, 2007; Kinne, Patrick, & Doyle, 2004; Morris, 2001). Though Putnam sampled the majority of sectors of American society, absent from his work is any mention of disability groups. Declining trends in civic engagement are especially relevant when applied to the disability community, which has historically experienced greater social isolation and lower social capital compared with the general population. Given the importance of social relationships for health and wellbeing, then, an exploration of the social lives of people with disabilities seems timely.
A common assumption is that social relationships are immaterial to individuals with disabilities either because they lack the ability to understand them or because they have too little in common with their nondisabled peers to develop meaningful relationships (O’Brien & O’Brien, 1993). However, people with disabilities who have friends are more likely to have a positive self-concept, better communication skills, healthier emotional functioning, more positive coping strategies and a better grasp of life skills (DeGeorge, 1998; Geisthardt, Brotherson, & Cook, 2002; Heiman, 2000; Schleien, Heyne, Rynders, & McAvoy, 1990; Stainback & Stainback, 1987). Although it is generally accepted that making friends is a simple and natural process, individuals with disabilities often do not make friends as easily and effortlessly as their non-disabled peers and tend, as a result, to have fewer friends and less stable relationships (DeGeorge, 1998; Irvine, 2007).
We acknowledge at the outset that efforts to understand social capital among people with disabilities may be complicated by the considerable heterogeneity in disability type, including cause, course, and severity, as well as the wide range of different rehabilitation needs and objectives. In addition, co-occurring individual and structural factors such as gender, age, race/ethnicity, and socioeconomic status may act interactively to enhance or constrain access to social capital (Webber, 2005) and have considerable implications for people’s wellbeing. Nevertheless, living with a disability of any kind has a profound impact on the physical, psychological, and social domains of everyday life and, accordingly, on one’s social capital. This is especially true for individuals with high support needs whose disabilities necessitate greater support in activities of daily living and for whom creating a repository of diverse and meaningful social networks may be a challenge. Disaggregation of data by impairment type is important and future research should assess how the experience of disability is moderated by this and other co-occurring variables, and to what extent these variables can predict levels of social capital. Our aim here, however, is to investigate the overall existing trends in social capital among a diverse group of individuals whose disabilities are sufficiently severe to necessitate ongoing support from social services and/or associations for community living.
### Present study
Professionals who work in the field of rehabilitation have long recognized the importance of the psychosocial aspects of disability and the interaction between individuals and their environments (Kenneth, 2004). However, no known study to date has attempted a comparative analysis of social capital levels between people with disabilities and the general population. In addition, little previous research has examined the sources from which many people with disabilities draw their social capital. Guided by Putnam’s seminal work, the present study sought to provide empirical answers to these questions by exploring broad areas of convergence and difference between the general population and people with disabilities who have high support needs and to provide an assessment of whether and in what ways these two groups differ.
In light of the existing literature on social capital and disability, we expect participants with disabilities to demonstrate greater deficits in their social capital levels compared with the general population. If on the other hand findings reveal that social capital levels are higher or comparable among participants with disabilities, this may be a reflection of the effectiveness of community-based organizations in providing the support and optimism needed to achieve community integration for this population. In addition, we hypothesize that the sources of social capital among people with disabilities, i.e., access to social capital building opportunities, will be primarily family members and/or rehabilitation professionals or other individuals in paid positions of care and support, rather than natural friendships.
### Method
#### Participants
Participants were drawn from a sample of convenience consisting of 218 individuals between the ages of 18 and 80 who have a variety of disabilities and high support needs. All participants were ongoing recipients of programs and/or services including residential, day support, social and recreational, and community support from the Interdependence Network, a group of six disability-based organizations from around the United States and Canada1. Participant demographics are provided in Table 1.
Table 1: Participant characteristics (N = 218)

| Characteristic | Frequency | Percent |
|---|---|---|
| **Gender** | | |
| Male | 135 | 62 |
| Female | 83 | 38 |
| **Age** | | |
| 18-25 | 30 | 13.8 |
| 25-29 | 17 | 7.8 |
| 30-39 | 39 | 17.9 |
| 40-49 | 58 | 26.6 |
| 50-59 | 38 | 17.4 |
| 60-69 | 6 | 2.8 |
| 70-79 | 2 | 0.9 |
| 80+ | 1 | 0.5 |
| Missing | 27 | 12.4 |
| **Disability Type** (more than one may apply) | | |
| Intellectual | 132 | 61 |
| Physical | 72 | 33 |
| Other mental health | 55 | 25 |
| Autism | 24 | 11 |
| Hard of hearing | 4 | 2 |
| Blind | 4 | 2 |
| Cerebral Palsy | 6 | 3 |
#### Questionnaire Development
The Harvard Kennedy School’s (HKS; 2006) Social Capital Community Benchmark Survey was used as a proxy to develop the Social Capital Inventory, the first tool created to examine social capital levels among individuals with disabilities (see Appendix A). The Social Capital Community Benchmark Survey was modified in a number of ways in order to facilitate its administration and make it more appropriate for our sample. First, questions with significant overlap were removed to reduce interview time and respondent fatigue. For example, of the 10 questions pertaining to social trust in specific situations and towards specific people, only the general question of whether most people can be trusted was adapted for the present study. Second, questions not relevant to the objectives of the study were also removed (e.g., “Would you like to see spending for public schools increased or decreased?”). Finally, some questions were added in order to better understand the social capital needs of our sample (e.g., “How many of your close friends are paid staff or support professionals?”). The final questionnaire consisted of 65 items relating to six key indicators of social capital as outlined by the Social Capital Community Benchmark Survey: Social Trust, Social Support, Diversity of Friendships, Conventional Politics Participation, Civic/Community Leadership, and Informal Socializing (see Table 2 for a sample of items used from each index). An additional index, Associational Involvement, was excluded from our analysis due to a large portion of missing data.
Response options included 4- or 5-point scales (e.g., For each of the following statements, please tell me whether you strongly agree, agree somewhat, disagree somewhat or strongly disagree); dichotomous responses (e.g., yes or no); and quantitative questions (e.g., How many siblings do you have?).
Table 2: Social Capital Index Measures

| Social Capital Index | Index Description | Sample Questionnaire Items |
|---|---|---|
| Social Trust | How much one trusts others | Generally speaking, would you say that most people can be trusted or cannot be trusted? / Would you say that most people try to be helpful or do you think that they are mostly looking out for themselves? |
| Social Support | The availability of social support systems and where people turn for help | Do you currently have a partner or spouse? / Besides your parents, siblings, and children, how many other relatives do you have that you feel close to? |
| Diversity of Friendships | The extent to which social networks are broad and diverse | Would you say that all of your friends know each other? / Can you count on anyone to provide you with emotional support? |
| Conventional Politics Participation | Involvement in the political process | Are you currently registered to vote? / Did you vote in the most recent election? |
| Civic/Community Leadership | Involvement in organized groups, such as sports teams, hobby groups, and religious associations | Are you a member of the following groups? / Would you say you attend religious services regularly, often, seldom, almost never, or never? |
| Informal Socializing | Connections developed through informal relationships, such as community activities, employment, and volunteerism | Please tell us how many times in the past 12 months you have participated in these types of activities. / How do you typically spend your time during the day? |
Questionnaire items were administered in a conversation-style format by trained agency staff who recorded participants' responses. To ensure consistency, staff members were provided with interview guidelines that included response wait times and suggested prompts. Where necessary, assistance was provided in explaining the meaning of questions and/or breaking them down into smaller segments. However, because participants represented a range of disabilities and levels of ability, it was not possible to completely control the amount of assistance provided by administrators. For example, some terms, such as friend, were defined while others, such as community, were left open to interpretation, possibly influencing participant responses. Interviews took between 45 and 60 minutes to complete. Before the interview, each participant was informed of the purpose of the questionnaire and consent was obtained.
#### Data Analysis
Pearson’s chi-square goodness of fit tests were performed to test for differences between observed frequencies obtained from the present study and expected frequencies found in the HKS survey. Because some of the questionnaire items were modified as mentioned earlier, direct comparison of questions between the present and HKS surveys was not always possible. In such cases, responses were compared with other general population statistics obtained from large, widely recognized, published surveys conducted by Statistics Canada (2008), the Berkeley Longitudinal Study (1972-2010), Pew Research Center (2005, 2007, 2009, 2011), and Roper Center for Public Opinion Research (2005). These surveys gather ongoing data and monitor changes in social trends to better understand the attitudes, values, and behaviours of the general public, and to inform research and social policy issues. Data are collected through random, nationally representative cross-sections of adults aged 18 years or older (except Statistics Canada, whose samples include individuals 15 years and older). Participants are selected through Random Digital Dialing, a process that generates phone numbers randomly based on in-use area codes. Computer-Assisted Telephone Interviewing or face-to-face interviews are employed. Weighting factors are used to ensure the samples are accurately representative of the population. Although little information is provided about sample demographics, there is a possibility that people with disabilities were included to some extent in these samples.
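For reference, Pearson's goodness-of-fit statistic compares the observed frequencies $O_i$ with the expected frequencies $E_i$ taken from the general population surveys, summed over the $k$ response categories (with $k - 1$ degrees of freedom):

$$\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}$$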
In some cases, questionnaire items were not found in general population surveys and thus could not be compared. Further, statistical data such as means and standard deviations were not always provided by general population surveys, making it impossible to conduct tests of significance. This necessitated some methodological compromise such as omitting a number of questions from our analysis and/or comparing responses from questions that were similar but not matched verbatim. In addition, because the questionnaire was developed for program evaluation purposes, it did include a number of qualitative items. Although we eliminated the majority of such questions from our analysis, some important items were retained and treated descriptively. These comparisons are not offered as evidence of statistical rigor; they do, however, permit exploration of broad areas of convergence and difference between the two sample groups and provide an assessment of the social capital of people with disabilities and whether this differs in any remarkable way from the general population.
Questions corresponded to one or more of the six indices. Responses to questions with 4- and 5-point response options were often combined for better clarity. For example, where values for both strongly agree and agree were high, they were united as a single agree response. Collapsing Likert responses into condensed categories has been cited as appropriate for analysis, particularly when wider scales are used (Brill, 2008). Of the 65 questions, 16 were excluded from our analysis due to significant overlap. An additional 23 questions were removed as these were qualitative and not subject to statistical analysis. Others still could not be matched with general population statistics, as mentioned earlier. Therefore, our final data analysis was based on 26 items.
Presentation of results is organized in accordance with the six social capital indices.
### Results
#### Social Trust
Overall, people with disabilities were significantly more likely to report higher levels of social trust. Seventy-eight percent agreed that most people can be trusted compared with 44% of general population respondents (HKS, 2006), χ2(1, N = 199) = 92.76, p < .001, and 71% agreed that most people are helpful compared with 62% of general population respondents (Pew Research Center, 2007), χ2(1, N = 211) = 6.38, p < .05. Perceptions of group acceptance were also higher with 80% of people with disabilities agreeing that their neighbourhood is accepting of people with disabilities compared with only 31% of general population respondents who reported feeling there is little or no discrimination against people with disabilities (Roper Center for Public Opinion Research, 2005). People with disabilities also provided higher neighbourhood ratings with 60% rating their neighbourhood as excellent or very good compared with 39% of general population respondents (HKS, 2006), χ2(3, N = 218) = 46.26, p < .001. However, 39% of people with disabilities reported feeling they have little or no impact on making their community a better place to live, compared with 21% of general population respondents (HKS, 2006), χ2(3, N = 218) = 60.83, p < .001.
#### Social Support
Only 17% of people with disabilities reported having a partner or spouse compared with 62% of general population respondents who reported being married (HKS, 2006), χ2(1, N = 212) = 174.92, p < .001. Only one-fifth of people with disabilities reported having children compared with 71% of general population respondents who reported having kids aged six and older (HKS, 2006), χ2(1, N = 193) = 246.79, p < .001. Sixty-nine percent of people with disabilities agreed that parents provide help during illness and over half (53%) agreed that parents help with household tasks and errands. Parental help in these areas was much lower for general population respondents with only 14% relying on parents during illness (Smith, Marsden, & Hout, 2011) and 6% relying on parents for help around the house (Smith et al., 2011). Instead, nearly half (48%) of general population respondents reported relying on their spouse during illness (Smith et al., 2011) and half relied on their spouse for help with household tasks (Smith et al., 2011).
#### Diversity of Friendships
Significant differences were found in reported number of close friends, χ2(3, N = 193) = 57.70, p < .001. As reported in Table 3, a greater number of people with disabilities reported having fewer friends while a greater number of general population respondents reported having more close friends. Further, 42% of people with disabilities identified at least one close friend as being a paid staff or support professional. Significant differences were also found in reported number of friends living in the same community, χ2(6, N = 184) = 136.82, p < .001. Nearly three times as many people with disabilities (28%) reported having none of their friends living in the same community compared with general population respondents (10%; Statistics Canada, 2008). Results are provided in Table 4.
Table 3: Comparison of Reported Number of Close Friends Between People with Disabilities and General Population Respondents (%)

| Number of close friends | People With Disabilities^a ($n=193$) | General Population Respondents |
|---|---|---|
| None | 10 | 4 |
| 1-2 | 33 | 18 |
| 3-5 | 14 | 12 |
| 6 or more | 21 | 43 |

a: Rounded values do not add to 100.
Social networks among people with disabilities were less diverse with 38% reporting that all of their friends already know one another compared with 12% of general population respondents (Smith et al., 2011), χ2(3, N = 183) = 202.63, p < .001. Similarly, only 17% of people with disabilities reported finding a job through a friend, or a friend of a friend, compared with 33% of general population respondents who found work through a friend or an acquaintance (Smith et al., 2011). Instead, people with disabilities were about three times more likely to rely on professional services for finding work (38%) than general population respondents (13%; Smith et al., 2011), χ2(2, N = 146) = 94.11, p < .001. When it comes to emotional support, 90% of people with disabilities reported having someone to count on. However, when asked who was most helpful in providing emotional support, 39% identified paid professionals over a partner, a parent, a sibling, another relative, or a friend. By contrast, only 1% of general population respondents reported turning to professionals when down or depressed (Smith et al., 2011) and 14.3% of general population respondents identified professionals as most helpful when dealing with a major life change (Statistics Canada, 2008). Parents ranked a close second with 38% of people with disabilities turning to them for emotional support, more than three times the rate of general population respondents (11%; Smith et al., 2011). Forty-three percent of people with disabilities identified parents as most helpful in providing financial support compared with only 20% of general population respondents who reported turning to parents to borrow a large sum of money (Smith et al., 2011), χ2(2, N = 218) = 85.09, p < .001. Only 8% and 1% of people with disabilities turned to a spouse or partner for emotional and financial support, respectively, compared with 32% and 14% of general population respondents, respectively (Smith et al., 2011).
Table 4: Comparison of Reported Number of Friends Living in the Same Community Between People with Disabilities and General Population Respondents (%)

| Number of Friends | People With Disabilities^a ($n=184$) | General Population Respondents |
|---|---|---|
| None | 28 | 10 |
| 1 | 18 | 8 |
| 2 | 14 | 12 |
| 3 | 10 | 15 |
| 4 | 4 | 12 |
| 5 | 4 | 10 |
| 6 or more | 8 | 26 |

a: Rounded values do not add to 100.
#### Conventional Politics Participation
Thirty-nine percent of people with disabilities reported not being registered to vote compared with 19% of general population respondents (HKS, 2006), χ2(1, N = 212) = 50.49, p < .001. Only about one-third (36%) of people with disabilities voted in the last election compared with 74% of general population respondents who did the same (HKS, 2006), χ2(1, N = 195) = 143.21, p < .001.
Participation among people with disabilities across all organized groups was low, ranging from 1% to 12%, compared with a participation range of 10% to 34% among general population respondents (HKS, 2006). Religious involvement among people with disabilities was also low. Less than one-third (29%) reported attending church services regularly or often compared with nearly half (48%) of general population respondents who attend services every week or more, or almost every week (HKS, 2006), χ2(3, N = 218) = 60.83, p < .001. Only 11% of people with disabilities reported assuming leading roles within their religious organization (e.g., choir membership) compared with 45% of general population respondents who reported participating in services outside of worship and 79% who reported volunteering at their place of worship (HKS, 2006). Over half (55%) of people with disabilities reported not knowing anyone else, or knowing only a few people, at religious services.
#### Informal Socializing
People with disabilities were asked how many times in the past 12 months they had participated in a list of informal activities. Participation rates were generally low, with activity averages ranging from 1% to 7%, compared with an average range of 2% to 25% among general population respondents who were asked whether they had participated in any informal activity over the past year (HKS, 2006). Agency staff ranked comparably with friends and with other relatives or family as the primary activity partner for people with disabilities in a number of informal activities. Results are provided in Table 5.
When asked about how they spend their day, people with disabilities were less likely to be working, with only 25% having either part- or full-time work compared with 62% of general population respondents who reported being employed (HKS, 2006), χ2(2, N = 218) = 368.63, p < .001. Nearly two-thirds of people with disabilities (62%) reported spending their day in either part- or full-time day programs or supported employment programs.
Table 5: Summary of Informal Activity Participation Rate and Primary Activity Partner Among People with Disabilities
Respondents who Engaged in Informal Activity with Primary Activity Partner (%)
Activity Other Friends Agency Staff Other Relatives/Family No One Roommates Spouse Co-workers Neighbours Church Members Activity Average Response Rate
Gone out to a restaurant 12.8 24.3 28 1.8 4.6 3.7 1.4 8.51 76.6
Gone to the movies 15.6 24.8 19.7 1.4 8.7 2.8 0.9 8.21 73.9
Been invited to the home of someone else 27.1 4.6 23.9 2.3 1.4 0.5 0.5 0.9 0.5 6.85 61.5
Hung out at a park, mall, or other public space 15.1 22.9 10.6 3.7 6 1.8 0.9 6.77 61
Had people over to your home 22 5 23.4 1.8 1.4 1.4 1.4 0.9 6.36 57.3
Entertained people in your home 20.6 5.5 19.3 3.2 1.8 1.4 0.5 0.9 5.91 53.2
Gone Bowling 12.8 20.2 4.1 1.8 6.4 0.9 0.5 5.18 46.8
Used the Internet 5.5 13.8 4.6 15.6 0.5 0.5 0.9 4.60 41.3
Played cards with others 7.8 15.6 8.3 0.9 6.9 0.5 0.9 4.54 40.8
Socialized with people outside of work 17 5.5 4.6 1.8 2.8 7.3 1.4 4.48 40.4
Gone to a health club or exercised 7.3 17.4 1.8 7.3 2.8 1.4 4.22 38.1
Gone to a museum 5.5 11 6.4 2.8 3.2 2.3 0.9 3.56 32.1
Played a team sport 13.3 6.9 1.4 2.3 1.4 0.9 0.5 2.96 26.6
Gone to a bar or tavern 9.6 3.7 4.1 3.2 0.5 2.3 0.5 2.65 23.9
Attended any public meetings on social issues 2.3 5.5 2.3 5 0.5 0.5 0.9 1.88 17
Average of all activities 12.95 12.44 10.83 3.66 3.10 1.26 1.22 0.31 0.24 5.11 46.03
Note: Dash (—) indicates responses where data were not reported. Activity Average is the row mean across the nine partner categories, with unreported cells counted as zero; Response Rate is the row total, that is, the percentage of respondents who reported engaging in the activity with any partner.
#### Health and Life Satisfaction
Although not constituting their own index, health and life satisfaction ratings were measured because extensive research has documented a strong relationship between social capital and physical and mental health. Significant differences were found in the distribution of health ratings between people with disabilities and general population respondents, χ2(3, N = 189) = 17.81, p < .001 (see Table 6; an illustrative computation follows the table). A greater number of general population respondents reported their health as excellent or very good (55%; HKS, 2006) compared with people with disabilities (40%), although more people with disabilities rated their health as good (40%) compared with general population respondents (28%). Overall, however, combined health ratings of good or better appear to be comparable between people with disabilities (80%) and general population respondents (83%). Life satisfaction ratings were also significantly different, with 93% of people with disabilities reporting they are quite happy or very happy compared with 83% of general population respondents who reported a life satisfaction rating of 7 or higher on a 10-point scale (HKS, 2006), χ2(1, N = 191) = 12.66, p < .001.
Table 6: Comparison of Health Ratings Between People With Disabilities and General Population Respondents (%)
State of Health People With Disabilities General Population Respondents
($n=189$)
Excellent/Very good 40 55
Good 40 28
Fair 14 12
Poor/Very poor 6 5
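As an illustrative check of the health-ratings comparison (again with counts reconstructed from the rounded percentages in Table 6, since exact cells were not published), the four categories yield a statistic close to the reported χ2(3, N = 189) = 17.81; the small discrepancy reflects rounding of the published percentages.

```python
# Hypothetical reconstruction of the Table 6 comparison (N = 189).
from scipy.stats import chisquare

n = 189
pwd = [0.40, 0.40, 0.14, 0.06]   # people with disabilities (observed shares)
gen = [0.55, 0.28, 0.12, 0.05]   # general population benchmark (HKS, 2006)

observed = [share * n for share in pwd]
expected = [share * n for share in gen]
stat, p = chisquare(f_obs=observed, f_exp=expected)  # df = 4 - 1 = 3
print(f"chi2(3, N = {n}) = {stat:.2f}, p = {p:.4f}")  # ~18.5, p < .001
```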
### Discussion
Survey comparisons indicate that (a) social capital levels among people with disabilities tend to be lower than those of general population respondents, and (b) in some cases, levels of social capital are consistent with, or higher than, levels found among general population respondents; where this occurs, it may reflect (c) an incongruity between subjective evaluations and objective reports, or (d) support received from non-normative sources. This section reviews findings of particular interest, explores possible explanations, and considers the clinical implications of our results.
#### (a) People with disabilities tend to have lower levels of social capital
Overall, people with disabilities show a marked disconnect from a number of social institutions, including marriage, parenthood, religious organizations, employment, and politics. Low engagement in these areas removes important potential sources of social support and of community integration. These findings are consistent with previous research showing that people with disabilities are less likely to marry and have a family life (Beber & Biswas, 2009; Sheppard-Jones, Prout, Kleinert, & Taylor, 2005) and receive less support and companionship from family members and friends than individuals without disabilities (Rosen & Burchard, 1990). People with disabilities also tend to have fewer close friends and are less likely to participate in both formal and informal activities. This is in line with previous work showing that people with disabilities are less involved in community groups and that their leisure activities tend to be solitary (Verdonschot, de Witte, Reichrath, Buntinx, & Curfs, 2009). This lack of involvement is particularly discouraging among religious institutions, which have historically encouraged the integration of different groups (McNair & Smith, 1998) and which, apart from worship, often entail participation in some form of religious community (Putnam, 2000; Stone, Cross, Purvis, & Young, 2003). Similarly, the workplace has traditionally been viewed as providing opportunities to create and build social ties with coworkers (Shooshtari, Naghipur, & Zhang, 2012). Respondents with disabilities, however, are less likely to be employed, consistent with a large body of evidence showing that a disproportionate number of people with disabilities are either under- or unemployed (Burkhauser & Stapleton, 2004; Dyda, 2008; Levy & Hernandez, 2009; Verdonschot et al., 2009). In addition, research has shown that people with disabilities tend to have lower labour force participation rates, devote less time to market work, and suffer greater earnings penalties (Benoit, Jansson, Jansenberger, & Phillips, 2013; Brown & Emery, 2010; Kelly, 2013; Pagan, 2013). Limited social connections therefore further hinder the likelihood of employment and remove this venue as an opportunity to further develop social capital.
Participation in the political process is another important measure of how involved we are in our communities. Political engagement provides an opportunity for individuals with disabilities to not only endorse candidates who are sympathetic to their cause, but also to form connections through their affiliation with political parties. People with disabilities, however, tend not to be politically involved. An under-representation of individuals with disabilities at the polls is not uncommon and may be due to a number of factors including a lack of understanding of the political process, difficulty accessing the polls or participating in door-to-door campaigning, or a general disinterest in politics (Bell, McKay, & Phillips, 2001; Keeley, Redley, Holland, & Clare, 2008; Pavey, 2003).
#### (b) In a few cases, people with disabilities report higher than expected levels of social capital
Other findings, however, are encouraging and point to higher than expected levels of social capital among people with disabilities. Neighbourhood ratings, along with perceptions of group acceptance, are higher among people with disabilities, and the majority report having at least one close friend. Most indicate they have someone to rely on for emotional, financial, and instrumental support, and they also report comparable ratings of general life satisfaction and overall health. Comparable health ratings are particularly noteworthy, as we would expect such evaluations to be much lower among our sample. Research has found disability status to be highly influential in how people think about and construct their health and health-related quality of life, with poorer self-rated physical and mental health often reported by people with disabilities (Drum, Horner-Johnson, & Krahn, 2008).
Equally interesting is the finding of higher social trust among people with disabilities. Given the systemic maltreatment that individuals with disabilities have historically experienced, particularly at the hands of those in positions of power or authority (Rossiter & Clarkson, 2013; Simpson, 2007; Sobsey, 1994; Stewart & Russell, 2001; Stroman, 2002), we expected participants to report lower levels of social trust. However, the institutions where such treatment took place have now been replaced by community-based organizations geared towards social justice and inclusion; these organizations can be viewed as safe, alternative settings that differentiate themselves from larger society’s values, attitudes and behaviour towards disability.
#### (c) However, in such cases, results are likely explained by an incongruity between subjective evaluations and objective reports
Although people with disabilities report having fewer close friends and are far more likely to have none of their close friends living in the same community, in many areas they nonetheless report higher than expected subjective evaluations of social capital. This may be explained by the particular settings in which many participants spend their time. Nearly two-thirds report attending part- or full-time day support or supported employment programs and this is likely where many of their relationships are formed, primarily with peers and agency staff. Therefore, for many, their psychological sense of community corresponds to and extends from such settings and/or groups of individuals that have proven to be trustworthy, and thus may not represent an accurate depiction of broader society. Relatedly, the concept of naïve optimism may help explain higher than expected ratings of life satisfaction and general health. Naïve optimism refers to an overly simplistic and trusting view of the world that often results in a biased interpretation of reality (Epstein & Meier, 1989). Because individuals with cognitive impairments were overrepresented in our sample, comparable ratings here may be the result of naïve optimism and stem from participants not being fully aware of the long-term health and social complexities associated with their conditions.
#### (d) Or by non-normative sources of support
Survey results also make clear that, compared with general population respondents, the sources from which people with disabilities derive their social support are non-normative. General population respondents are most likely to locate emotional, financial and instrumental support in marriage and partnership. Indeed, as we move through life, our primary source of support is often a spouse or partner (Peters, 2008). People with disabilities, however, report lower rates of marriage and partnership and therefore lack these key providers of informal care (Ashman, Hulme, & Suttie, 1990). Instead, parents and paid professional staff appear to dominate this area of social capital. Research shows that social support for people with disabilities is most often provided by family members (Lippold & Burns, 2009) and that aging parents commonly remain the primary caregivers throughout life (Kropf, 1997; Shooshtari et al., 2012). Although not a variable in the present study, the role of adult siblings is also conceptualized as one of primary caregiver, particularly after parents pass on or are no longer able to provide care (Atkin & Tozer, 2014; Egan & Walsh, 2001; Heller & Arnold, 2010). However, sibling roles and relationships are varied, and research has found mixed results over how these responsibilities are negotiated over time and across the life span. While some research has found a generally positive life impact of people with disabilities on their adult siblings (Heller & Arnold, 2010), other studies reveal a negative impact, with nondisabled siblings reporting concerns about the expectation of future caregiving and significant stress over how to fulfill other social and family obligations alongside their sense of duty to support a brother or sister with a disability (Davys, Mitchell, & Haigh, 2011; Heller & Kramer, 2009; Tozer, Atkin, & Wenham, 2013). This may help explain why people with disabilities most frequently identify staff members as providers of emotional support and often perceive staff as central to their social support networks, and even their friendships (Antaki, Finlay, & Walton, 2007; Lippold & Burns, 2009). Indeed, according to Taylor and Bogdan (1989), friendships among individuals with disabilities often emerge out of an earlier professional or caring relationship.
Although parents and professionals are traditionally atypical sources of support for adults, this study does make clear that these individuals fill an obvious and important gap in the lives of people with disabilities. Our findings speak to the success of social programs such as those offered by the Interdependence Network agencies that clearly account for a considerable part of the creation of social capital and its beneficial effects. Indeed, secondary supports such as these have been shown to provide a protective function even in the absence of primary ties (Syrotuik & D’Arcy, 1984) with some (West, Kregel, Hernandez, & Hock, 1997) arguing that professional support can in fact enhance one’s abilities to fulfill social needs.
It is important to note, however, that the quality of relationships formed with professionals may be overestimated by individuals with disabilities and falsely perceived as true friendships (Green & Schleien, 1991). Although agency staff, attendants, and other service providers are often identified as friends, there are typically qualitative differences in the nature of these relationships, as they tend to evolve out of feelings of obligation and may involve a lower level of social engagement on the part of the professional (Irvine, 2007; Lippold & Burns, 2009). In addition, agency policies are often designed to protect employees’ confidentiality (Runnion & Wolfer, 2004) and may discourage social interactions between staff members and clients outside of agency settings and, in some cases, even between clients themselves. Further, these support systems tend to be fluid; continuously decreasing government funding means that professional supports are not sustainable, long-term solutions. Indeed, agency staff and other professionals are temporary figures who often come and go over time. Parents, too, age and eventually pass on, often leaving adults with disabilities with poor informal networks (Krauss, Seltzer, & Goodman, 1992). Though no less supportive, parental and professional ties are removed from traditional sources of support and depart from the natural evolution most of us undergo as we progress through life. Our research supports this concern, as over two-thirds of our sample are over the age of 30 but, for the most part, have not moved on to replace parents and professionals with a life partner.
A key factor in successful social integration is the encouragement of diverse friendships between people with and without disabilities (Ager, Myers, Kerr, Myles, & Green, 2001). Day support and supported employment programs, where many people with disabilities spend their time, tend to be highly homogeneous and are designed almost exclusively for people with intellectual disability. Thus, opportunities for establishing diverse social connections may be limited to the peers and support staff they meet in these programs. Indeed, people with disabilities report that many of their friends already know one another, and this is consistent with previous research showing that participation in social activities among people with disabilities is more common with others who also have a disability (Emerson & McVilly, 2004). Agency staff also contribute significantly as activity partners in a number of informal activities, ranking comparably with friends and with family and other relatives. Previous research shows that people with disabilities are often accompanied in an activity by training or therapeutic staff (Verdonschot et al., 2009) and that staff are often instrumental in organizing participation in social activities (Todd, 2000). Although there was no general population comparison for this question, it is widely accepted that the general population does not typically partake in social activities with professionals, but rather with family members and friends. Thus, our results support the notion that people with disabilities have restricted social networks and may be developing few relationships with nondisabled individuals who are not relatives and who are not paid to support them.
### Conclusion
Social connectedness matters to our lives in the most profound way; the lack of meaningful connections with others is often a significant source of suffering (Peters, 2008). This study aimed to fill an important gap in the literature by reaching beyond anecdote to answer empirically the question of social capital among people with disabilities. Our findings point to appreciable differences in social capital among these individuals compared with the general population as well as among the sources from which their social capital is drawn.
The present study had a number of methodological challenges, as addressed earlier, and was also limited by low response rates in some cases and by the retrospective nature of the questionnaire. In particular, response rates in Table 5 (informal activity participation rate and primary activity partner) varied widely and may indicate a generally low level of participation among respondents for some activities. For example, when asked about attending public meetings on social issues, a response rate of 17% likely indicates that this is an activity in which most respondents simply do not partake. Although this made it difficult for us to draw significant conclusions, it does make a powerful statement about the extent to which respondents are active in their communities and engaged in common activities. Also important to note is that difficulties with response rates, response bias, reliability, and validity are not uncommon when conducting research with people with intellectual disability (Finlay & Lyons, 2002; Heal & Sigelman, 1995), and response rates have been found to be markedly lower among participants with moderate to profound intellectual disability (Hartley & MacLean, 2006). The variability in response rates may therefore be attributed to the large number of respondents with intellectual disability in our sample (see participant demographics in Table 1). These and other challenges are inherent in conducting research with this population. Nevertheless, the results reported here are meaningful in constructing an understanding of the social capital experiences of our sample. Equally important is allowing participants with disabilities a more active role in research by capturing their own subjective views, rather than relying on informant reports and observational ratings as has been the case historically (Beart, Hardy, & Buchan, 2004; Schalock et al., 2002). Participation in research can be viewed as a form of self-advocacy, as it provides opportunities for people with disabilities to speak and stand up for themselves, stand up for their rights, make choices, and be independent (People First, 1996). Engaging in self-advocacy has also been found to benefit individuals with disabilities by contributing positively to their confidence and self-concept (Beart, Hardy, & Buchan, 2004).
It is important to note that the participants in this study were identified through their affiliation as service recipients of one of the six community-based disability agencies and thus represent a small and proactive subset of individuals living with disability who have successfully connected with community living organizations. Further, they tend to live in large, urban, and progressive cities where formal support services for people with disabilities have traditionally been available. Ashman, Hulme, and Suttie (1990) found notable differences in community members’ access to and use of facilities and social programs between rural and urban regions. Thus, circumstances are likely substantially different for people with disabilities who reside in more rural areas without access to services, and who likely spend the majority of their time at home with parents or other non-normative figures. Though these populations are often difficult to reach for research purposes, we expect their social capital levels to be lower than those revealed by this study, and their sources of social capital to consist primarily of family members in the absence of support gained through affiliations with community programs. In addition, because our sample was one of convenience rather than a random sample as is used in the comparative data, there is the possibility that other sociodemographic variables that were not taken into account could have influenced differences in social capital.
If it is accepted that the experience of disability rests on the relationship between the individual and the social environment, then a continued focus should be placed on rehabilitation practices that encourage and support community engagement for people with disabilities. Our findings provide a good starting point for comparative future research in this area as well as an informed direction for professionals working in the field. Our hope is that the concept of social capital will continue to appear in contemporary discourse about how best to encourage and support individuals with disabilities in their search for ways to connect meaningfully with others in their communities.
### References
• Ager, A., Myers, F., Kerr, P., Myles, S., & Green, A. (2001). Moving home: Social integration for adults with intellectual disabilities resettling into community provision. Journal of Applied Research in Intellectual Disabilities, 14(4), 392-400. doi: 10.1046/j.1468-3148.2001.00082.x
• Antaki, C., Finlay, W.M.L., & Walton, C. (2007). The staff are your friends: Intellectually disabled identities in official discourse and interactional practice. British Journal of Social Psychology, 46(1), 1-18. doi: 10.1348/014466606X94437
• Appel, L., Dadlani, P., Dwyer, M., Hampton, K., Kitzie, V., et al. (2014). Testing the validity of social capital measures in the study of information and communication technologies. Information, Communication & Society, 17(4), 398-416.
• Ashman, A.F., Hulme, P., & Suttie, J. (1990). The life circumstances of aged people with an intellectual disability. Australia & New Zealand Journal of Developmental Disabilities, 16(4), 335-347.
• Atkin, K., & Tozer, R. (2014). Personalization, family relationships and autism: Conceptualising the role of adult siblings. Journal of Social Work, 14(3), 225-242.
• Barnes, C., & Sheldon, A. (2010). Disability, politics and poverty in a majority world context. Disability & Society, 25(7), 771-782.
• Beart, S., Hardy, G., & Buchan, L. (2004). Changing selves: A grounded theory account of belonging to a self-advocacy group for people with intellectual disabilities. Journal of Applied Research in Intellectual Disabilities, 17, 91-101.
• Beber, E., & Biswas, A.B. (2009). Marriage and family life in people with developmental disability. International Journal of Culture and Mental Health, 2(2), 102-108. doi: 10.1080/17447140903205317
• Bell, D.M., McKay, C., & Phillips, K.J. (2001). Overcoming the barriers to voting experienced by people with learning disabilities. British Journal of Learning Disabilities, 29(4), 122-127. doi: 10.1046/j.1468-3156.2001.00127.x
• Benedet, J., & Grant, I. (2014). Sexual assault and the meaning of power and authority for women with mental disabilities. Feminist Legal Studies, 22, 131-154.
• Benoit, C., Jansson, M., Jansenberger, M., & Phillips, R. (2013). Disability stigmatization as a barrier to employment equity for legally-blind Canadians. Disability & Society, 28(7), 970-983.
• Bourdieu, P. (1986). The forms of capital. In J. G. Richardson (Ed.), Handbook of Theory and Research for the Sociology of Education. New York: Greenwood.
• Bramston, P., Bruggerman, K., & Pretty, G. (2002). Community perspectives and subjective quality of life. International Journal of Disability, Development and Education, 49, 385-397.
• Bricher, G. (2000). Disabled people, health professionals and the social model of disability: Can there be a research relationship? Disability & Society 15(5), 781-793. doi: 10.1080/713662004
• Brill, J. (2008). Likert scale. In P. Lavrakas (Ed.), Encyclopedia of survey research methods (pp. 428-430). Thousand Oaks, CA: Sage Publications, Inc.
• Brisson, D., Roll, S., & East, J. (2009). Race and ethnicity as moderators of neighbor- hood bonding social capital: Effects on employment outcomes for families living in low-income neighborhoods. Families in Society, 90(4), 368–74.
• Brown, C. L., & Emery, J. C. H. (2010). The impact of disability on earnings and labour force participation in Canada: Evidence from the 2001 PALS and from Canadian case law. Journal of Legal Economics, 16(2), 19-59.
• Burkhauser, R.V., & Stapleton, D.C. (2004). The decline in the employment rate for people with disabilities: Bad data, bad health, or bad policy? Journal of Vocational Rehabilitation, 20(3),185-201.
• Cohen, S. (2004). Social relationships and health. American Psychologist, 59(8), 676-684. doi: 10.1037/0003-066X.59.8.676
• Coleman, J. (1988). Social capital in the creation of human capital. American Journal of Sociology, 94(1), 95–120.
• Condeluci, A., Ledbetter, M.G., Ortman, D., Fromknecht, J., & DeFries, M. (2008). Social capital: A view from the field. Journal of Vocational Rehabilitation, 29(3), 133-139.
• Davys, D., Mitchell, D., & Haigh, C. (2011). Adult sibling experiences, roles, relationships and future concerns – A review of the literature in learning disabilities. Journal of Clinical Nursing, 20(19-20), 2837-2852.
• d’Hombres, B., Rocco, L., Suhrcke, M., & McKee, M. (2011). Does social capital determine health? Evidence from eight transition countries. Health Economics, 19(1), 56-74. doi: 10.1002/hec.1445
• DeGeorge, K.L. (1998). Friendships and stories: Using children’s literature to teach friendship skills to children with learning disabilities. Intervention in School and Clinic, 33(3), 157-162. doi: 10.1177/105345129803300304
• Donoghue, C. (2003). Challenging the authority of the medical definition of disability: An analysis of the resistance to the social constructionist paradigm. Disability & Society, 18(2), 199-208. doi: 10.1080/0968759032000052833
• Drum, C. E., Horner-Johnson, W., & Krahn, G. L. (2008). Self-rated health and healthy days: Examining the “disability paradox.” Disability and Health Journal, 1(2), 71-78.
• Dyda, D. (2008). Jobs change lives: Social capital and shared value exchange. Journal of Vocational Rehabilitation, 29(3), 147-56.
• Egan, J., & Walsh, P. N. (2001). Sources of stress among adult siblings of Irish people with intellectual disability. The Irish Journal of Psychology, 22(1), 28-38.
• Emerson, E., & McVilly, K. (2004). Friendship activities of adults with intellectual disabilities in supported accommodation in northern England. Journal of Applied Research in Intellectual Disabilities, 17(3), 191-197. doi: 10.1111/j.1468-3148.2004.00198.x
• Epstein, S., & Meier, P. (1989). Constructive thinking: A broad coping variable with specific components. Journal of Personality and Social Psychology, 57(2), 332–350. doi: 10.1037/0022-3514.57.2.332
• Finlay, W. M., & Lyons, E. (2002). Acquiescence in interviews with people who have mental retardation. Mental Retardation, 40, 14-29.
• Flaherty, P. (2008). Social capital and its relevance in brain injury rehabilitation services. Journal of Vocational Rehabilitation, 29(3), 141-146.
• Folland, S. (2007). Does “community social capital” contribute to population health? Social Science & Medicine 64(11), 2342-2354. doi:10.1016/j.socscimed.2007.03.003
• Fowler, K., Wareham-Fowler, S., & Barnes, C. (2013). Social context and depression severity and duration in Canadian men and women: Exploring the influence of social support and sense of community belongingness. Journal of Applied Social Psychology, 43, E85-E96.
• Gambrill, E., & Paquin, G. (1992). Neighbours: A neglected resource. Children and Youth Services Review, 14(3-4), 253-272. doi: 10.1016/0190-7409(92)90030-Y
• Galer, D. (2014). “A place to work like any other?” Sheltered workshops in Canada, 1970-1985. Canadian Journal of Disability Studies, 3(2), 1-30.
• Geisthardt, C.L., Brotherson, M.J., & Cook, C. (2002). Friendships of children with disabilities in the home environment. Education & Training in Mental Retardation & Developmental Disabilities, 37(3), 235-252.
• Green, F., & Schleien, S. (1991). Understanding friendship and recreation: A theoretical sampling. Therapeutic Recreation Journal, 25(4), 29-40.
• Hampton, K., Goulet, L.S., Her, E.J., & Rainie, L. (2009). Social isolation and new technology. Retrieved from Pew Research Centre website: http://www.pewinternet.org/Reports/2009/18--Social-Isolation-and-New-Technology/Part-3-Network-Diversity-and-Community/2-Are-internet-users-less-likely-to-participate-in-the-local-community.aspx
• Hampton, K., Goulet, L.S., Rainie, L., & Purcell, K. (2011). Social networking sites and our lives. Retrieved from Pew Research Centre website: http://www.pewinternet.org/Reports/2011/Technology-and-social-networks/Part-4/Trust.aspx
• Hartley, L. S., & MacLean, E. W. (2006). A review of the reliability and validity of Likert-type scales for people with intellectual disability. Journal of Intellectual Disability Research, 50(11), 813-827.
• Harvard Kennedy School. (2006). Social Capital Community Benchmark Survey [Data file]. Retrieved from http://www.hks.harvard.edu/saguaro/pdfs/2006SCCSbanner.pdf
• Hawe, P., & Shiell, A. (2000). Social capital and health promotion: A review. Social Science & Medicine, 51(6), 871-885. doi: 10.1016/S0277-9536(00)00067-8
• Hawkins, R. L., & Maurer, K. (2012). Unravelling social capital: Disentangling a concept for social work. British Journal of Social Work, 42, 353-370.
• Heal, L. W., & Sigelman, C. K. (1995). Response biases in interviews of individuals with limited mental ability. Journal of Intellectual Disability Research, 39, 331-340.
• Heiman, T. (2000). Quality and quantity of friendship: Student and teacher perceptions. School Psychology International, 21(3), 265-280. doi:10.1177/0143034300213004
• Heller, T., & Arnold, C. K. (2010). Siblings of adults with developmental disabilities: Psychosocial outcomes, relationships, and future planning. Journal of Policy and Practice in Intellectual Disabilities, 7(1), 16-25.
• Heller, T., & Kramer, J. (2009). Involvement of adult siblings of persons with developmental disabilities in future planning. Intellectual and Developmental Disabilities, 47(3), 208-219.
• Hubbard, S. (2004). Disability studies and health care curriculum: The great divide. Journal of Allied Health, 33(3),184-188.
• Irvine, A.N. (2007). Social relationships from the perspective of persons with developmental disabilities, their family members, educators, and employers (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses Database. (AAINR32981)
• Isaac, R., Raja, B.W.D., & Ravanan, M. P. (2010). Integrating people with disabilities: Their right – our responsibility. Disability & Society, 25(5), 627-630.
• Kampert, A. L., & Goreczny, A. J. (2007). Community involvement and socialization among individuals with mental retardation. Research in Developmental Disabilities, 28, 278-286.
• Kanazawa, S., & Savage, J. (2009). Why nobody seems to know what exactly social capital is. The Journal of Social, Evolutionary, and Cultural Psychology, 3(2), 118-132.
• Karp, D.A. (1994). The dialectics of depression. Symbolic Interaction, 17(4), 341-366. doi: 10.1525/si.1994.17.4.341
• Kawachi, I., & Berkman, L. F. (2001). Social ties and mental health. Journal of Urban Health: Bulletin of the New York Academy of Medicine, 78(3), 458-467.
• Keeley, L.H., Redley, M., Holland, A.J., & Clare, I.C.H. (2008). Participation in the 2005 general election by adults with intellectual disabilities. Journal of Intellectual Disability Research, 52(3), 175-181. doi: 10.1111/j.1365-2788.2007.00991.x
• Kelly, S. M. (2013). Labor force participation rates among working-age individuals with visual impairments. Journal of Visual Impairment & Blindness, 107(6), 509-513.
• Kenneth, T. R. (2004). Old wine in a slightly cracked new bottle. American Psychologist 59(4), 274-275.
• Kinne, S., Patrick, D. L., & Doyle, D. L. (2004). Prevalence of secondary conditions among people with disabilities. American Journal of Public Health, 94, 443-445.
• Krauss, M.W., Seltzer, M.M., & Goodman, S.J. (1992). Social support networks of adults with mental retardation who live at home. American Journal on Mental Retardation, 96(4), 432-441.
• Kroll, C. (2011). Different things make different people happy: Examining social capital and subjective well-being by gender and parental status. Social Indicators Research, 104(1),157-177. doi: 10.1007/s11205-010-9733-1
• Kropf, N.P. (1997). Older parents of adults with developmental disabilities: Practice issues and service needs. Journal of Family Psychotherapy, 8(2), 37-54. doi: 10.1300/J085V08N02_04
• Kulkarni, M. (2012). Social networks and career advancement of people with disabilities. Human Resource Development Review, 11(2), 138-155. doi: 10.1177/1534484312438216
• Lau, K., Chow, S.M.K., & Lo, S.K. (2006) Parents’ perception of the quality of life of preschool children at risk or having developmental disabilities. Quality of Life Research: An International Journal of Quality of Life Aspects of Treatment, Care & Rehabilitation, 15(7), 1133-1141.
• Levy, J.M., & Hernandez, B. (2009). Employment and people with disabilities. Journal of Social Work in Disability & Rehabilitation, 8(3-4), 99-101. doi: 10.1080/15367100903200353
• Lin, N. (2001) Social capital: A theory of social structure and action. Cambridge: Cambridge University Press.
• Lippold, T., & Burns, J. (2009). Social support and intellectual disabilities: A comparison between social networks of adults with intellectual disability and those with physical disability. Journal of Intellectual Disability Research, 53(5), 463-473. doi: 10.1111/j.1365-2788.2009.01170.x
• Lovell, S. A., Gray, A. R., & Boucher, S. E. (2015). Developing and validating a measure of community capacity: Why volunteers make the best neighbours. Social Science & Medicine, 133, 261-268.
• McNair, J., & Smith, H.K. (1998). Community-based natural support through local churches. Mental Retardation, 36(3), 237-241. doi: 10.1352/0047-6765(1998)036<0237:CNSTLC>2.0.CO;2
• Merriam, S. B., & Kee, Y. (2014). Promoting community wellbeing: The case for lifelong learning for older adults. Adult Education Quarterly, 64(2), 128-144.
• Mithen, J., Aitken, J., Ziersch, A., & Kavanagh, A. N. (2015). Inequalities in social capital and health between people with and without disabilities. Social Science & Medicine, 126, 26-35.
• Morris, J. (2001). That kind of life? Social exclusion and young disabled people with high levels of support needs. London, England: Scope.
• Murayama, H., Fujiwara, Y., & Kawachi, I. (2012). Social capital and health: A review of prospective multilevel studies. Journal of Epidemiology, 22, 179-187.
• National Organization on Disability. (2010). Harris Survey of Americans with Disabilities. New York, NY. Retrieved from http://www.2010disabilitysurveys.org
• O’Brien, K. (2012). Review of the social cure: Identity, health and wellbeing. The Australian Educational and Developmental Psychologist, 29(1), 79-80.
• O’Brien, J., & O’Brien, C.L. (1993). Unlikely alliances: Friendships and people with developmental disabilities. In A.R. Novak Amado (Ed.), Friendships and community connections between people with and without developmental disabilities (pp. 9-39). Baltimore, MD: Paul H. Brookes Publishing.
• Orne, M.T. (2002). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. Prevention & Treatment, 35, 1-11. doi: 10.1037/1522-3736.5.1.535a
• Pagan, R. (2013). Time allocation of disabled individuals. Social Science & Medicine, 84, 80-93.
• Parris, A.N., & Granger, T.A. (2008). The power and relativity of social capital. Journal of Vocational Rehabilitation, 29(3), 165-71.
• Partington, K. (2005). What do we mean by our community? Journal of Intellectual Disabilities, 9(3), 241-251. doi: 10.1177/1744629505056697
• Pavey, B. (2003). Citizenship and special educational needs: What are you going to do about teaching them to vote? Support for Learning, 18(2), 58-65. doi: 10.1111/1467-9604.00281
• People First Workers. (1996). Speaking out for equal rights workbook 2, equal people course books. Buckingham: Open University, People First & MENCAP.
• Peters, T. (2008). The relationship of self-disclosure and perceived emotional support to adult wellbeing (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (AAI3333425)
• Pew Research Centre. (2005). Pew Research Centre for the People & the Press. Retrieved from: http://www.people-press.org/2005/04/19/additional-findings-and-analyses/
• Pew Research Centre. (2007). Pew Research Social & Demographic Trends [Data file]. Retrieved from: http://www.pewsocialtrends.org/2007/02/22/americans-and-social-trust-who-where-and-why/
• Poortinga, W. (2006). Social relations or social capital? Individual and community health effects of bonding social capital. Social Science & Medicine, 63(1), 255-270.
• Potts, B. (2005). Disability and employment: Considering the importance of social capital. Journal of Rehabilitation, 71(3), 20-25.
• Putnam, R. (1995). Tuning in, tuning out: The strange disappearance of social capital in America. PS: Political Science & Politics, 28(4), 664-683. doi: 10.2307/420517
• Putnam, R. (2000). Bowling alone: The collapse and revival of American community. New York, NY: Simon & Schuster.
• Rocco, L., & Suhrcke, M. (2012). Is social capital good for health? A European perspective. Copenhagen: WHO Regional Office for Europe. Retrieved from http://www.euro.who.int/__data/assets/pdf_file/0005/170078/Is-Social-Capital-good-for-your-health.pdf
• Roper Centre for Public Opinion Research. (2005). Taking America’s Pulse III – Intergroup Relations Survey [Data file]. Retrieved from University of Connecticut, iPOLL Databank: http://www.ropercenter.uconn.edu/data_access/ipoll/ipoll.html
• Rosen, J.W., & Burchard, S.N. (1990). Community activities and social support networks: A social comparison of adults with and adults without mental retardation. Education & Training in Mental Retardation, 25(2), 193-204.
• Rosnow, R.L. (2002). The nature and role of demand characteristics in scientific inquiry. Prevention & Treatment, 5(1), 1-7. doi: 10.1037/1522-3736.5.1.537c
• Rossiter, K., & Clarkson, A. (2013). A social history of Huronia Regional Centre. Canadian Journal of Disability Studies, 2(3), 1-30.
• Runnion, V.M., & Wolfer, T.A. (2004). Relationship disruption in adults with cognitive disabilities. Families in Society, 85(2), 205-213.
• Saltkjel, T., & Malmberg-Heimonen, I. (2014). Social inequalities, social trust and civic participation – the case of Norway. European Journal of Social Work, 17(1), 118-134.
• Scheffler, R.M., Brown, T.T., & Rice, J.K. (2007). The role of social capital in reducing non-specific psychological distress: The importance of controlling for omitted variable bias. Social Science & Medicine, 65(4), 842-854. doi: 10.1016/j.socscimed.2007.03.042
• Schleien, S.J., Heyne, L.A., Rynders, J.E., & McAvoy, L.H. (1990). Equity and excellence: Serving all children in community recreation. Journal of Physical Education, Recreation, and Dance, 61(8), 45-48.
• Schuller, T. (2007). Reflections on the use of social capital. Review of Social Economy, 65(1), 11-28.
• Schalock, R. L., Brown, I., Brown, R., Cummins, R. A., Felce, D., Matikka, L., Keith, K. D., & Parmenter, T. (2002). Conceptualization, measurement, and application of quality of life for persons with intellectual disabilities: Report of an international panel of experts. Mental Retardation, 40, 457-470.
• Sheppard-Jones, K.H., Prout, T., Kleinert, H., & Taylor, S.J. (2005). Quality of life dimensions for adults with developmental disabilities: A comparative study. Mental Retardation, 43(4), 281-291. doi: 10.1352/0047-6765(2005)43[281:QOLDFA]2.0.CO;2
• Shooshtari, S., Naghipur, S., & Zhang, J. (2012). Unmet healthcare and social services needs of older Canadian adults with developmental disabilities. Journal of Policy and Practice in Intellectual Disabilities, 9(2), 81-91. doi: 10.1111/j.1741-1130.2012.00346.x
• Simpson, M. I. (2007). From savage to citizen: Education, colonialism and idiocy. British Journal of Sociology of Education, 28(5), 561-574.
• Smith, T.W., Marsden, P.V., & Hout, M. (2011). General Social Survey, 1972-2010 [Cumulative data file]. Retrieved from University of Connecticut/Ann Arbor (US), UC Berkeley website: http://sda.berkeley.edu/cgi-bin/hsda?harcsda+gss10
• Sobsey, D. (1994). Violence and abuse in the lives of people with disabilities: The end of silent acceptance? Baltimore, MD: Paul H. Brooks.
• Stainback, W., & Stainback, S. (1987). Facilitating friendships. Education and Training in Mental Retardation, 22(1), 18-25.
• Statistics Canada, Social and Aboriginal Statistics Division. (2008). General Social Survey, Cycle 22 – Social Engagement [Data file]. Retrieved from http://sda.chass.utoronto.ca/sdaweb/html/gss.htm.
• Stewart, N. (1985). Winning friends at work. New York, NY: Ballantine Books.
• Stewart, J., & Russell, M. (2001). Disablement, prison, and historical segregation. Monthly Review, 53(3), 1-7.
• Stone, H.W., Cross, D.R., Purvis, K.B., & Young, M.J. (2003). A study of the benefit of social and religious support on church members during times of crisis. Pastoral Psychology, 51(4), 327-340. doi: 10.1023/A:1022537400283
• Stroman, D. F. (2002). The disability rights movement: From deinstitutionalization to self-determination. University Press of America.
• Svendsen, G., & Sorensen, J.F.L. (2006). The socioeconomic power of social capital: A double test of Putnam’s civic society argument. International Journal of Sociology and Social Policy, 26(9/10), 411-429. doi: 10.1108/01443330610690550
• Syrotuik, J., & D’Arcy C. (1984). Social support and mental health: Direct, protective and compensatory effects. Social Science & Medicine, 18(3), 229-236. doi: 10.1016/0277-9536(84)90084-4
• Taylor, S.J., & Bogdan, R. (1989). On accepting relationships between people with mental retardation and non-disabled people: Towards an understanding of acceptance. Disability, Handicap & Society, 4(1), 21-36. doi: 10.1080/02674648966780021
• Todd, S. (2000). Working in the public and private domains: Staff management of community activities for and the identities of people with intellectual disability. Journal of Intellectual Disability Research, 44(5), 600-620. doi: 10.1046/j.1365-2788.2000.00281.x
• Tozer, R., Atkin, K., & Wenham, A. (2013). Continuity, commitment and context: Adult siblings of people with autism plus learning disability. Health & Social Care in the Community, 21(5), 480-488.
• van Alphen, L.M., Dijker, A.J.M., van den Borne, H.H.W., & Curfs, L.M.G. (2009). The significance of neighbors: Views and experiences of people with intellectual disability on neighboring. Journal of Intellectual Disability Research, 53(8), 745-757. doi: 10.1111/j.1365-2788.2009.01188.x
• Verdonschot, M.M.L., de Witte, L.P., Reichrath, E., Buntinx, W.H.E., & Curfs, L.M.G. (2009). Community participation of people with an intellectual disability: A review of empirical findings. Journal of Intellectual Disability Research, 53(4), 303-318. doi: 10.1111/j.1365-2788.2008.01144.x
• Victor, C.R., Scambler, S.J., Bowling, A., & Bond, J. (2005). The prevalence of, and risk factors for, loneliness in later life: A survey of older people in Great Britain. Ageing & Society, 25, 357-375. doi: 10.1017/S0144686X04003332
• Walker, H.M., Calkins, C., Wehmeyer, M.L., Walker, L., Bacon, A., Palmer, S.B., … Johnson, D.R. (2011). A social-ecological approach to promote self-determination. Exceptionality, 19(1), 6-18. doi: 10.1080/09362835.2011.537220
• Walker, R.B., & Hiller, J.E. (2007). Places and health: A qualitative study to explore how older women living alone perceive the social and physical dimensions of their neighbourhoods. Social Science & Medicine, 65(6), 1154-1165. doi: 10.1016/j.socscimed.2007.04.031
• Webber, M., Reidy, H., Ansari, D., Stevens, M., & Morris, D. (2015). Enhancing social networks: A qualitative study of health and social care practice in UK mental health services. Health & Social Care in the Community, 23(2), 180-189.
• West, M., Kregel, J., Hernandez, A., & Hock, T. (1997). Everybody’s doing it: A national study of the use of natural supports in supported employment. Focus on Autism and Other Developmental Disabilities, 12(3), 175-181.
• White, G. W., Simpson, J. L., Gonda, C., Ravesloot, C., & Coble, Z. (2010). Moving from independence to interdependence: A conceptual model for better understanding community participation of centers for independent living consumers. Journal of Disability Policy Studies, 20(4), 233-240.
• World Health Organization. (2001). International Classification of Functioning, Disability and Health. Retrieved from http://www.who.int/classifications/icf/en/.
• World Health Organization and World Bank Group. (2011). World Report on Disability. Geneva: WHO.
• Williams, J.M. (2008). Building social capital: The communityworks, inc. experience. Journal of Vocational Rehabilitation, 29(3), 157-163.
• Ziersch, A. M., & Baum, F. E. (2004). Involvement in civil society groups: Is it good for your health? Journal of Epidemiological Community Health, 58, 493-500.
• Ziersch, A.M., Baum, F.E., MacDougall, C., & Putland, C. (2005). Neighbourhood life and social capital: The implications for health. Social Science & Medicine, 60(1), 71-86. doi: 10.1016/j.socscimed.2004.04.027
1. The Interdependence Network affiliate agencies include: Community Living Mississauga (Mississauga, Ontario, Canada); communityworks, Inc. (Kansas City, Missouri, USA); Connect Communities (Vancouver, British Columbia, Canada); Hope Services (San Jose, California, USA); John F. Murphy Homes (Portland, Maine, USA); and United Cerebral Palsy/CLASS (Pittsburgh, Pennsylvania, USA).
# Two Obama scandals blend risky tech, politics and money
President Obama has been hit with a one-two punch in recent weeks by two scandals that both involve risky tech startups, politics, money and the question of how to create innovation around national infrastructure, one for energy and the other for communications. While I covered solar maker Solyndra’s bankruptcy, which took down with it a government loan of over $500 million, Stacey Higginbotham has been writing about wireless network company LightSquared’s plans and emerging political story. Here’s where I see these two narratives connecting in various places, and some potential lessons from these two scandals:

1. The political climate. Both Solyndra and LightSquared faced hearings from Republican committees last week, looking to investigate whether the White House stepped in and helped garner these companies federal support: for Solyndra, that was the awarding of a $535 million loan, and for LightSquared it was both a conditional waiver from the FCC for its spectrum plans and potential White House interference with the testimony of an influential general. LightSquared is now the subject of House Armed Services Committee hearings, and Solyndra is fodder for the House Energy and Commerce Committee hearings.
As the Guardian put it: “One of the consequences of Republicans winning control of the House of Representatives was always going to be embarrassing probes by congressional committees. Now the results are starting to come out.” Cronyism is the new term of the month, and Republican Presidential candidate Michele Bachmann even started using it about Obama in her speeches.
2. Technology risk. Solyndra and LightSquared were born out of “big ideas” that delivered innovative, though high-risk, technology. Solyndra has already proven to be a bust, the risks having caught up with it, so I’ll start there. Solyndra’s business plan focused on making solar panels without using silicon (the material that traditional solar panels use), on the premise that the price of silicon would go up. Instead, silicon prices went down, and it couldn’t compete with traditional solar panel makers. Solyndra’s other innovation was a panel built out of a series of tubes that cost more to produce but that the company thought would be cheaper to install, and thus would have lower overall costs. That never proved out in the marketplace because the company’s high manufacturing costs led it to run out of money before the thesis could be tested.
LightSquared hasn’t yet become a disaster, but its ambitions are so large that Stacey thinks it could be building a castle made of sand. The company aims to create a combined satellite and terrestrial network that could provide a competitive wireless broadband alternative for the nation. LightSquared’s tech risks come from the interference its spectrum causes with GPS signals, as well as the massive amount of funding it needs and overall questions about the viability of a wholesale business model involving 4G.
3. High flying investors. These two companies wouldn’t have gotten off the ground without large amounts of funding from a few (and potentially influential) investors. For LightSquared, that’s Philip Falcone, the owner of Harbinger Capital, which has a $3 billion majority stake in the company. For Solyndra, it’s George Kaiser and his firm Argonaut Ventures, which held 39 percent of the solar maker, with Madrone Partners, USVP Venture Partners and Rockport Capital Partners holding far smaller shares. Solyndra raised over a billion dollars from private investors, plus the $535 million government loan. While investors like these commonly have high appetites for risk, when their ventures start relying on government money or support, the public and lawmakers have decidedly less of an appetite for risk.
4. The government picking (the wrong?) winners. Solyndra received a government loan from the Department of Energy’s loan guarantee program, which doles out funds to the dozens of companies it has selected. In essence, it picks energy winners and losers. It has now come to light that the DOE helped Solyndra restructure its loan earlier this year when Solyndra was imminently going to go bankrupt, and the DOE even restructured it so that Argonaut would get its most recent funding paid back before the government loan (taxpayer funds). In the case of LightSquared, the FCC has been granting waiver upon waiver for the company, even as its spectrum suddenly (and surprisingly) became problematic because of the GPS interference.
5. Ambitious infrastructure goals. LightSquared’s goal is to offer a wireless broadband competitor in an increasingly duopolistic U.S. wireless broadband market, and the FCC and the Obama administration share that goal. Solyndra was looking to contribute to the rise of clean power and distributed rooftop solar generation, which is a carbon-free, decentralized power source that could help fight climate change. The Obama administration and the DOE share that goal, too. But developing new infrastructure for both energy and communications takes massive amounts of funding and large partners. Both of these things are difficult for startups to secure, with or without technology and business plan issues.
6. How does the government support tech innovation without attaching itself to risky startups? There are, no doubt, other ways for progressive administrations to spur technology innovation. For clean power, the DOE has other subsidy programs and projects that seem to be working better than its loan guarantee program. There are the investment tax credits and the 1603 Treasury grant program, which are much less risky, as well as the R&D-focused ARPA-E program that doles out small grants for university and startup research.
7. The incumbent technologies have a lot more political connections, and money. While in recent weeks Republicans have been pointing to the potential political connections of Solyndra and its investors, and LightSquared and its investors, the incumbent technologies that these companies are competing with have far more political influence and funding. Some of the top spenders of lobbying dollars in 2010 were AT&T and Verizon, which don’t necessarily support wireless broadband competition, and coal-heavy utility Southern Company, which has an interest in keeping its fossil fuel infrastructure in place.
|
2021-09-23 11:53:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22318989038467407, "perplexity": 4845.23152261119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057421.82/warc/CC-MAIN-20210923104706-20210923134706-00237.warc.gz"}
|
https://www.gbaglobal.net/resources/categories/classroom
|
This executive summary sets the stage for the comprehensive report in which the Government Bloc...
Randall Lee Pires, Joshua Armah
Showing 19 results
|
2022-05-23 15:18:46
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9641586542129517, "perplexity": 2450.4089902692454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558030.43/warc/CC-MAIN-20220523132100-20220523162100-00232.warc.gz"}
|
https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/7/lesson/7.1.1/problem/7-10
|
7-10.
Sketch the region bounded by $y = x^4$ and $y = 1$. Calculate the exact area of the region.
What are your limits of integration? Which function has higher $y$-values and which has lower?
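A sketch of the computation: the curves intersect where $x^4 = 1$, that is, at $x = \pm 1$, and $y = 1$ lies above $y = x^4$ on that interval, so
$\text{Area} = \int_{-1}^{1} \left(1 - x^4\right)\,dx = \left[x - \frac{x^5}{5}\right]_{-1}^{1} = \left(1 - \frac{1}{5}\right) - \left(-1 + \frac{1}{5}\right) = \frac{8}{5}$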
|
2021-09-26 03:19:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8511502742767334, "perplexity": 1464.4991570389286}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057796.87/warc/CC-MAIN-20210926022920-20210926052920-00420.warc.gz"}
|
https://proofwiki.org/wiki/Parseval%27s_Theorem
|
# Parseval's Theorem
## Theorem
Let $f$ be a function square-integrable over $\left[-\pi \ldots \pi\right]$ given by the Fourier series,
$\displaystyle f(x) \cong \frac {a_0} 2 + \sum_{n=1}^\infty \left(a_n \cos\left(nx\right) + b_n\sin\left(nx\right)\right)$
Then,
$\displaystyle \frac 1 \pi \int_{-\pi}^\pi \left|{f(x)}\right|^2 \mathrm dx = \frac {a_0^2} 2 + \sum_{n=1}^\infty \left(a^2_n + b^2_n\right)$
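A classic application: taking $f(x) = x$, whose Fourier coefficients are $a_n = 0$ and $b_n = \dfrac{2(-1)^{n+1}}{n}$, the theorem gives
$\displaystyle \frac 1 \pi \int_{-\pi}^\pi x^2 \,\mathrm dx = \frac{2\pi^2}{3} = \sum_{n=1}^\infty \frac 4 {n^2}$
from which the Basel sum $\displaystyle \sum_{n=1}^\infty \frac 1 {n^2} = \frac{\pi^2} 6$ follows.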
## Source of Name
This entry was named for Marc-Antoine Parseval.
|
2018-02-24 19:40:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7820825576782227, "perplexity": 5487.432633450625}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815934.81/warc/CC-MAIN-20180224191934-20180224211934-00190.warc.gz"}
|
http://mathhelpforum.com/calculus/58535-stationary-points.html
|
# Math Help - Stationary points
1. ## Stationary points
Prove that the polynomial
$f=(x^2+y^2)(x^2+y^2+1)z+z^3+x+y$
doesn't have real stationary points.
Thanks!!!
2. Originally Posted by roporte
Prove that the polynomial
$f=(x^2+y^2)(x^2+y^2+1)z+z^3+x+y$
doesn't have real stationary points.
Thanks!!!
Start by calculating the partial derivatives and equating them to zero. Can you do that? Post your answers.
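Carrying out that first step, the partial derivatives set to zero are
$\frac{\partial f}{\partial x} = \left(4x^3 + 4xy^2 + 2x\right)z + 1 = 0$
$\frac{\partial f}{\partial y} = \left(4y^3 + 4x^2y + 2y\right)z + 1 = 0$
$\frac{\partial f}{\partial z} = (x^2+y^2)(x^2+y^2+1) + 3z^2 = 0$
The third equation is a sum of nonnegative terms, so it forces $x = y = z = 0$; but then the first equation reads $1 = 0$, a contradiction. Hence $f$ has no real stationary points.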
|
2015-09-01 04:11:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8762442469596863, "perplexity": 3916.3481251942617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645151768.51/warc/CC-MAIN-20150827031231-00099-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://www.rdocumentation.org/packages/base/versions/3.6.0/topics/nargs
|
# nargs
##### The Number of Arguments to a Function
When used inside a function body, nargs returns the number of arguments supplied to that function, including positional arguments left blank.
Keywords
programming
##### Usage
nargs()
##### Details
The count includes empty (missing) arguments, so that foo(x,,z) will be considered to have three arguments (see ‘Examples’). This can occur in rather indirect ways, so for example x[] might dispatch a call to [.some_method(x, ) which is considered to have two arguments.
This is a primitive function.
##### References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
##### See Also
args, formals and sys.call.
##### Examples
library(base)
# NOT RUN {
tst <- function(a, b = 3, ...) { nargs() }
tst()              # 0
tst(clicketyclack) # 1 (even non-existing)
tst(c1, a2, rr3)   # 3
foo <- function(x, y, z, w) {
  cat("call was ", deparse(match.call()), "\n", sep = "")
  nargs()
}
foo()      # 0
foo(, , 3) # 3
foo(z = 3) # 1, even though this is the same call
nargs()    # not really meaningful
# }
|
2019-06-20 23:54:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2625381350517273, "perplexity": 8278.26856219582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999291.1/warc/CC-MAIN-20190620230326-20190621012326-00416.warc.gz"}
|
https://docs.wpilib.org/en/stable/docs/yearly-overview/yearly-changelog.html
|
# New for 2021¶
A number of improvements have been made to FRC® Control System software for 2021. This article will describe and provide a brief overview of the new changes and features as well as a more complete changelog for C++/Java WPILib changes. This document only includes the most relevant changes for end users, the full list of changes can be viewed on the various WPILib GitHub repositories.
Important
Due to internal GradleRIO changes, it is necessary to update previous years projects. After Installing WPILib for 2021, any 2020 projects must be imported to be compatible.
## Major Features¶
• A hardware-level WebSocket interface has been added to allow remote access to robot code being simulated in a desktop environment.
• Support for the Romi robot platform. Romi robot code runs in the desktop simulator environment and talks to the Romi via the new WebSocket interface.
• A new robot data visualizer – Glass – has been added. Glass has a similar UI to the simulator GUI and supports many of the same features; however, Glass can be used as a standalone dashboard and is not tied to the robot program.
• The WPILib installer has been rewritten to support macOS and Linux and to improve ease of use.
• The macOS installer is notarized, eliminating the need for Gatekeeper bypass.
• Please see the installation instructions as it differs from previous years.
• Added support for model-based control with Kalman filters, extended Kalman filters, unscented Kalman filters, and linear-quadratic regulators. See Introduction to State-Space Control for more information.
## WPILib¶
### Breaking Changes¶
• curvature_t moved from frc to units namespace (C++)
• Trajectory constraint methods are now const in C++. Teams defining their own custom constraints should mark the MaxVelocity() and MinMaxAcceleration() methods as const.
• The Field2d class (added midway through the 2020 season) was moved from the simulation package (edu.wpi.first.wpilibj.simulation / frc/simulation/) to the SmartDashboard package (edu.wpi.first.wpilibj.smartdashboard / frc/SmartDashboard/). This allows teams to send their robot position over NetworkTables to be viewed in Glass. The Field2d instance can be sent using SmartDashboard.putData("Field", m_field2d) / frc::SmartDashboard::PutData("Field", &m_field2d) or by using one of the Shuffleboard methods. This must be done in order to see the Field2d in the Simulator GUI.
• PWM Speed Controllers' get() method has been modified to return the same value as was set(), regardless of inversion. The value that still takes into account the inversion can be retrieved with the getSpeed() method. This affects the following classes: DMC60, Jaguar, PWMSparkMax, PWMTalonFX, PWMTalonSRX, PWMVenom, PWMVictorSPX, SD540, Spark, Talon, Victor, and VictorSP.
### New Command-Based Library¶
• Watchdog and epoch reporting has been added to the command scheduler. This will let teams know exactly which command or subsystem is responsible for a loop overrun if one occurs.
• Added a withName() command decorator for Java teams. This lets teams set the name of a particular command using the decorator pattern.
• Added a NetworkButton class, allowing users to use a boolean NetworkTableEntry as a button to trigger commands.
• Added a simulationPeriodic() method to Subsystem. This method runs periodically during simulation, in addition to the regular periodic() method.
### General Library¶
• Holonomic Drive Controller - A controller that teams with holonomic drivetrains (i.e. swerve and mecanum) can use to follow trajectories. This also supports custom Rotation2d heading inputs that are separate from the trajectory because heading dynamics are decoupled from translational movement in holonomic drivetrains.
• Added support for scheduling functions more often than the robot loop via addPeriodic() in TimedRobot. Previously, teams had to make a Notifier to run feedback controllers more often than the TimedRobot loop period of 20ms (running TimedRobot more often than this is not advised). Now, users can run feedback controllers more often than the main robot loop, but synchronously with the TimedRobot periodic functions, so there aren't any thread safety issues. See an example here; a minimal sketch also follows this list.
• Added a toggle() function to Solenoid and DoubleSolenoid.
• Added a SpeedControllerGroup constructor that takes a std::vector<> (C++) / SpeedController[] (Java), allowing the list to be constructed dynamically. (Teams shouldn’t use this directly. This is only intended for bindings in languages like Python.)
• Added methods (isOperatorControlEnabled() and isAutonomousEnabled()) to check game and enabled state together.
• Added a ScopedTracer class for C++ teams to be able to time pieces of code. Simply instantiate the ScopedTracer at the top of a block of code and the time will be printed to the console when the instance goes out of scope.
• Added a static method fromHSV(int h, int s, int v) to create a Color instance from HSV values.
• Added RT priority constructor to Notifier in C++. This makes the thread backing the Notifier run at real-time priority, reducing timing jitter.
• Added a DriverStation.getInstance().isJoystickConnected(int) method to check if a joystick is connected to the Driver Station.
• Added a DriverStation.getInstance().silenceJoystickConnectionWarning(boolean) method to silence the warning when a joystick is not connected. This setting has no effect (i.e. warnings will continue to be printed) when the robot is connected to a real FMS.
• Added a constructor to Translation2d that takes in a distance and angle. This is effectively converting from polar coordinates to Cartesian coordinates.
• Added EllipticalRegionConstraint, RectangularRegionConstraint, and MaxVelocityConstraint to allow constraining trajectory velocity in a certain region of the field.
• Added equals() operator to the Trajectory class to compare two or more trajectories.
• Added zero-arg constructor to the Trajectory class in Java that creates an empty trajectory.
• Added a special exception to catch trajectory constraint misbehavior. This notifies users when user-defined constraints are misbehaving (i.e. min acceleration is greater than max acceleration).
• Added a getRotation2d() method to the Gyro interface. This method automatically takes care of converting from gyro conventions to geometry conventions.
• Added angular acceleration units for C++ teams. These are available in the <units/angular_acceleration.h> header.
• Added X and Y component getters in Pose2d - getX() and getY() in Java, X() and Y() in C++.
• Added implicit conversion from degree_t to Rotation2d in C++. This allows teams to use a degree value (i.e. 47_deg) wherever a Rotation2d is required.
• Fixed bug in path following examples where odometry was not being reset to the starting pose of the trajectory.
• Fixed some spline generation bugs for advanced users who were using control vectors directly.
• Fixed theta controller continuous input in swerve examples. This fixes the behavior where the shortest path is not used during drivetrain rotation.
• Deprecated units.h; use individual units headers instead, which speeds up compile times.
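As an illustration of the addPeriodic() item above, here is a minimal Java sketch. The PIDController gains and the getMeasurement()/setMotorOutput() helpers are hypothetical stand-ins for real sensor and motor plumbing, not WPILib APIs:

import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj.controller.PIDController;

public class Robot extends TimedRobot {
  // Hypothetical gains, for illustration only.
  private final PIDController m_controller = new PIDController(0.1, 0.0, 0.0);

  public Robot() {
    // Runs the controller every 5 ms, in step with the main 20 ms loop,
    // so no separate Notifier thread (and no thread-safety concern) is needed.
    addPeriodic(() -> setMotorOutput(m_controller.calculate(getMeasurement())), 0.005);
  }

  // Stand-ins for real sensor/motor code.
  private double getMeasurement() { return 0.0; }
  private void setMotorOutput(double output) {}
}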
## Simulation¶
• Added keyboard virtual joystick simulation support.
• Added Mechanism2D for visualizing mechanisms in simulation.
• Added simulation physics classes for common robot mechanisms (DrivetrainSim, ElevatorSim, SingleJointedArmSim, and FlywheelSim)
## Shuffleboard¶
• Number Slider now displays the text value
• Graphing Widget now uses ChartFX, a high performance graphing library
• Fixed decimal digit formatting with large numbers
• Size and position can now be set separately in the Shuffleboard API
## SmartDashboard¶
• Host IP can be specified in configuration.
## PathWeaver¶
• Added support for reversed splines
• The coordinate system in the exported JSON has changed to be compatible with the simulator GUI. See Importing a PathWeaver JSON for more information.
## GradleRIO¶
• Added a vendordep task for downloading vendor JSONs or fetching them from the user wpilib folder
• Added a gradlerio.vendordep.folder.path property to set a non-default location for the vendor JSON folder
• Renamed the wpi task (that prints current versions of WPILib and tools) to wpiVersions
• Added the ability to set environment variables during simulation
• To set the environment variable HALSIMWS_HOST use:
sim {
envVar "HALSIMWS_HOST", "10.0.0.2"
}
## CSCore¶
• Now only lists streamable devices on Linux platforms.
## Visual Studio Code Extension¶
• Visual Studio Code has been updated to 1.52.1
• Updated Java and C++ language extensions
• Driverstation sim extension is now enabled by default
• Project importer now retains the commands version used in the original project
• Clarified the text on the new project and project importer screens
• Fixed import corrupting binary files
• Fixed link order in C++ build.gradle projects
• Updated “Change Select Default Simulate Extension Setting” command to work with multiple sim extensions
## RobotBuilder¶
• Updated to be compatible with the new command based framework and PID Controller.
• Due to the major changes in templates, RobotBuilder will not accept a save file from a previous year. You must regenerate the yaml save file and export to a new directory.
• A version of RobotBuilder that still exports to the old command based framework has been included with the installer and is called RobotBuilder-Old
• C++: use uniform initialization of objects in header
• C++: fixed case of includes so that code compiles on case-sensitive filesystems
• Use project name as default for save file
• Fixed export of wiring file
• Fixed line-endings for scripts so they work on MacOS/Linux
|
2021-10-17 18:04:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2464010864496231, "perplexity": 7726.365953518649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585181.6/warc/CC-MAIN-20211017175237-20211017205237-00336.warc.gz"}
|
https://index.mirasmart.com/ISMRM2019/PDFfiles/2464.html
|
2464
A 3D Fully Convolutional Network with various input dimension for brain extraction in MRI
Xibo Zhang1, Zhe Liu2, Pascal Spincemaille2, and Yi Wang2
1Tsinghua University, Beijing, China, 2Cornell University, New York, NY, United States
Synopsis
A 3D fully convolutional network is proposed, using a cascade architecture and combining two different channels to overcome the low accuracy of traditional methods. The network is applied to brain extraction.
Target Audience
Researchers and clinicians interested in MRI image processing.
Introduction
Brain extraction, also known as skull stripping, is a crucial process in magnetic resonance imaging (MRI), especially functional MRI, and is used as the first step in many MRI processing pipelines. Traditional topology-based brain extraction methods (1) have been proposed for specific datasets, although their accuracy can be unsatisfying for general use. Machine learning based extraction methods have recently been proposed, based on convolutional neural networks (CNN) with a single input size (4). In this work, we explore a combination of multiple input sizes in a CNN for improved accuracy in brain extraction.
Methods
Network: A 3D CNN was constructed as in Figure 1. We divided the original 3D MRI volume into patches of two different sizes, in order to utilize both local and global information in determining the boundary of the brain: a small patch, as used in previous studies (6), focused on detailed information within a localized neighborhood and enabled a deeper network, while a larger patch provided more global information. Intermediate feature maps were connected to the later classification layer via shortcut connections. The network produced output patches, which were then compiled to recover the full volume.
Data acquisition and processing: We obtained 18 T1w scans of healthy subjects (voxel size 0.94x1.5x0.94 mm3) from the Internet Brain Segmentation Repository (IBSR) dataset and 10 T1w scans of patients from the Weill Cornell clinical dataset.
Training: For the IBSR dataset, 16 of 18 subjects were used for training, giving a total of 32000 patches, from which a randomly selected subset served as the validation set. For the clinical dataset, 9 of 10 subjects were used for training. We employed the dice loss function as in (6), training the network with the Adam optimizer (3) (learning rate 0.001, 30 epochs).
Post-processing and analysis: The network output yields a precise brain border. Given the goal of the task, voxels inside the border are kept and those outside are discarded: we find a seed in the brain center and then fill the whole brain region, followed by a smoothing step. Dice score, sensitivity and specificity were used to evaluate the network performance on the hold-out data in each dataset.
$Dice = \frac{2\left|P \cap R\right|}{\left|P\right| + \left|R\right|} = \frac{2TP}{2TP+FP+FN}$
$Sensitivity= \frac{TP}{TP+FN}$
$Specificity= \frac{TN}{TN+FP}$
Here TP, TN, FP and FN denote true positive, true negative, false positive and false negative counts, respectively. The proposed network was compared with several alternative methods for brain extraction (2), as shown in Table 1.
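As a quick numerical illustration of these metrics (with hypothetical counts, not taken from the paper): for $TP = 90$, $FP = 10$, $FN = 10$ and $TN = 890$,
$Dice = \frac{2 \times 90}{2 \times 90 + 10 + 10} = 0.90, \qquad Sensitivity = \frac{90}{100} = 0.90, \qquad Specificity = \frac{890}{900} \approx 0.99$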
Results:
The quantitative comparison on the IBSR dataset is shown in Table 1. The proposed network achieved the highest Dice score and specificity, and the second-highest sensitivity (97.6%), compared to the highest (99.1%) given by HWA (1). Figure 2 compares the ground truth and the proposed brain extraction on a representative test subject. The proposed network outperformed BET in quantitative metrics on the clinical dataset, as shown in Table 2. A qualitative comparison of brain extraction is shown in Figure 3. Table 2 also indicates that the post-processing improved the Dice score and specificity.
Conclusion:
We propose a novel 3D fully convolutional network which combines two different input sizes for brain extraction.
Acknowledgements
We thank everyone in the Wang lab for inspiring discussions and selfless assistance.
References
1. Smith, Stephen M. Fast robust automated brain extraction. Human Brain Mapping 17.3 (2002): 143-155.
2. Palanisamy, Kalavathi and Prasath, Surya. Methods on skull stripping of MRI head scan images: a review. Journal of Digital Imaging.
3. Kingma, D. P. and Ba, J. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
4. Kleesiek, J., Urban, G., Hubert, A., et al. Deep MRI brain extraction: a 3D convolutional neural network for skull stripping. NeuroImage 129 (2016): 460-469.
5. Zhao, Gengyan, Liu, Fang, Oler, Jonathan, Meyerand, Mary E., Kalin, Ned H. and Birn, Rasmus M. Bayesian convolutional neural network based MRI brain extraction on nonhuman primates. NeuroImage (2018).
6. Dolz, J., Desrosiers, C. and Ben Ayed, I. 3D fully convolutional networks for subcortical segmentation in MRI: a large-scale study. NeuroImage (2017).
Figures
Figure 1. Network architecture. Kernel size and feature number are denoted at each convolutional layer. Input patches of the two sizes are provided to separate branches of the network. One max pooling layer is added in one of the branches. The network output is a voxel-wise brain/non-brain classification.
Table 1. Dice score, sensitivity and specificity on the IBSR dataset for different methods. The best candidate in each category was highlighted in bold.
Figure 2. Comparison of the ground truth (red) and our brain extraction result (blue) on a test case, displayed in axial, sagittal and coronal views.
Table 2. Dice score, sensitivity and specificity on the clinical dataset for BET, proposed network with and without post-processing, respectively. Post-processing improved the dice score and specificity.
Figure 3. Comparison of the BET method (yellow), proposed network (blue) and the ground truth (red).
Proc. Intl. Soc. Mag. Reson. Med. 27 (2019) 2464
|
2022-05-26 18:37:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5104811787605286, "perplexity": 3446.922201275591}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662619221.81/warc/CC-MAIN-20220526162749-20220526192749-00164.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1200/2/h/f/
|
# Properties
Label: 1200.2.h.f
Level: $1200$
Weight: $2$
Character orbit: 1200.h
Analytic conductor: $9.582$
Analytic rank: $0$
Dimension: $2$
CM discriminant: $-3$
Inner twists: $4$
## Newspace parameters
Level: $$N = 1200 = 2^{4} \cdot 3 \cdot 5^{2}$$
Weight: $$k = 2$$
Character orbit: $$[\chi] =$$ 1200.h (of order $$2$$, degree $$1$$, minimal)
## Newform invariants
Self dual: no
Analytic conductor: $$9.58204824255$$
Analytic rank: $$0$$
Dimension: $$2$$
Coefficient field: $$\Q(\sqrt{-3})$$
Defining polynomial: $$x^{2} - x + 1$$
Coefficient ring: $$\Z[a_1, a_2, a_3]$$
Coefficient ring index: $$2$$
Twist minimal: yes
Sato-Tate group: $\mathrm{U}(1)[D_{2}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a primitive root of unity $$\zeta_{6}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + ( -1 + 2 \zeta_{6} ) q^{3} + ( 1 - 2 \zeta_{6} ) q^{7} -3 q^{9} +O(q^{10})$$ $$q + ( -1 + 2 \zeta_{6} ) q^{3} + ( 1 - 2 \zeta_{6} ) q^{7} -3 q^{9} + 5 q^{13} + ( -5 + 10 \zeta_{6} ) q^{19} + 3 q^{21} + ( 3 - 6 \zeta_{6} ) q^{27} + ( -5 + 10 \zeta_{6} ) q^{31} + 10 q^{37} + ( -5 + 10 \zeta_{6} ) q^{39} + ( -7 + 14 \zeta_{6} ) q^{43} + 4 q^{49} -15 q^{57} -13 q^{61} + ( -3 + 6 \zeta_{6} ) q^{63} + ( 9 - 18 \zeta_{6} ) q^{67} -10 q^{73} + ( -10 + 20 \zeta_{6} ) q^{79} + 9 q^{81} + ( 5 - 10 \zeta_{6} ) q^{91} -15 q^{93} + 5 q^{97} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2q - 6q^{9} + O(q^{10})$$ $$2q - 6q^{9} + 10q^{13} + 6q^{21} + 20q^{37} + 8q^{49} - 30q^{57} - 26q^{61} - 20q^{73} + 18q^{81} - 30q^{93} + 10q^{97} + O(q^{100})$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1200\mathbb{Z}\right)^\times$$.
$$n$$: $$401$$, $$577$$, $$751$$, $$901$$
$$\chi(n)$$: $$-1$$, $$1$$, $$-1$$, $$1$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
1151.1 ($$\nu = 0.5 - 0.866025i$$): $$a_2 = 0$$, $$a_3 = -1.73205i$$, $$a_4 = 0$$, $$a_5 = 0$$, $$a_6 = 0$$, $$a_7 = 1.73205i$$, $$a_8 = 0$$, $$a_9 = -3.00000$$, $$a_{10} = 0$$
1151.2 ($$\nu = 0.5 + 0.866025i$$): $$a_2 = 0$$, $$a_3 = 1.73205i$$, $$a_4 = 0$$, $$a_5 = 0$$, $$a_6 = 0$$, $$a_7 = -1.73205i$$, $$a_8 = 0$$, $$a_9 = -3.00000$$, $$a_{10} = 0$$
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
3.b odd 2 1 CM by $$\Q(\sqrt{-3})$$
4.b odd 2 1 inner
12.b even 2 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 1200.2.h.f yes 2
3.b odd 2 1 CM 1200.2.h.f yes 2
4.b odd 2 1 inner 1200.2.h.f yes 2
5.b even 2 1 1200.2.h.d 2
5.c odd 4 2 1200.2.o.h 4
12.b even 2 1 inner 1200.2.h.f yes 2
15.d odd 2 1 1200.2.h.d 2
15.e even 4 2 1200.2.o.h 4
20.d odd 2 1 1200.2.h.d 2
20.e even 4 2 1200.2.o.h 4
60.h even 2 1 1200.2.h.d 2
60.l odd 4 2 1200.2.o.h 4
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
1200.2.h.d 2 5.b even 2 1
1200.2.h.d 2 15.d odd 2 1
1200.2.h.d 2 20.d odd 2 1
1200.2.h.d 2 60.h even 2 1
1200.2.h.f yes 2 1.a even 1 1 trivial
1200.2.h.f yes 2 3.b odd 2 1 CM
1200.2.h.f yes 2 4.b odd 2 1 inner
1200.2.h.f yes 2 12.b even 2 1 inner
1200.2.o.h 4 5.c odd 4 2
1200.2.o.h 4 15.e even 4 2
1200.2.o.h 4 20.e even 4 2
1200.2.o.h 4 60.l odd 4 2
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(1200, [\chi])$$:
$$T_{7}^{2} + 3$$, $$T_{11}$$, $$T_{13} - 5$$, $$T_{23}$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T^{2}$$
$3$ $$3 + T^{2}$$
$5$ $$T^{2}$$
$7$ $$3 + T^{2}$$
$11$ $$T^{2}$$
$13$ $$( -5 + T )^{2}$$
$17$ $$T^{2}$$
$19$ $$75 + T^{2}$$
$23$ $$T^{2}$$
$29$ $$T^{2}$$
$31$ $$75 + T^{2}$$
$37$ $$( -10 + T )^{2}$$
$41$ $$T^{2}$$
$43$ $$147 + T^{2}$$
$47$ $$T^{2}$$
$53$ $$T^{2}$$
$59$ $$T^{2}$$
$61$ $$( 13 + T )^{2}$$
$67$ $$243 + T^{2}$$
$71$ $$T^{2}$$
$73$ $$( 10 + T )^{2}$$
$79$ $$300 + T^{2}$$
$83$ $$T^{2}$$
$89$ $$T^{2}$$
$97$ $$( -5 + T )^{2}$$
|
2021-06-21 11:43:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9732063412666321, "perplexity": 6873.403781817437}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00102.warc.gz"}
|
https://zbmath.org/?q=an%3A0769.58056
|
# zbMATH — the first resource for mathematics
On the nodal line of the second eigenfunction of the Laplacian in $$\mathbb{R}^2$$. (English) Zbl 0769.58056
Let $$\Omega \subset \mathbb{R}^2$$ be a bounded convex domain with $$C^\infty$$ boundary, and let $$u_2$$ be a nontrivial solution of the Dirichlet problem $\begin{cases} \Delta u_2 + \lambda_2 u_2 = 0 & \text{in } \Omega,\\ u_2 = 0 & \text{on } \partial\Omega \end{cases}$ where $$\Delta = \sum^2_{i=1} {\partial^2 \over \partial x^2_i}$$ is the Laplace operator and $$\lambda_2$$ its second eigenvalue. The nodal line $$N$$ is given by $$N = \overline{\{x \in \Omega : u_2(x) = 0\}}$$. The main result of the paper is
Theorem. The nodal line $$N$$ intersects the boundary $$\partial\Omega$$ at exactly two points. In particular, $$N$$ does not enclose a compact subregion of $$\Omega$$.
Reviewer: C.Bär (Bonn)
##### MSC:
58J50 Spectral problems; spectral geometry; scattering theory on manifolds
58J32 Boundary value problems on manifolds
|
2021-02-25 16:46:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8086123466491699, "perplexity": 330.50850125993645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351374.10/warc/CC-MAIN-20210225153633-20210225183633-00222.warc.gz"}
|
https://ruor.uottawa.ca/handle/10393/6803
|
### Oat seed storage protein genes: Promoter studies in transgenic tobacco plants.
##### Description
Title: Oat seed storage protein genes: Promoter studies in transgenic tobacco plants.
Authors: Potier, Bernard.
Date: 1993
Abstract: The seed storage proteins of oat are mainly represented by the globulins (75%) and the prolamins (10%). These proteins are only found in the endosperm and accumulate within protein bodies during the maturation of the oat seeds. Three genomic sequences encoding globulin polypeptides and one avenin genomic sequence encoding an oat prolamin (avenin) have been isolated and characterized. The respective promoters of these genomic clones were fused to the coding sequence of the $\beta$-glucuronidase reporter gene (GUS). These constructs, together with the entire sequence of a globulin gene, were transferred into tobacco via Agrobacterium tumefaciens. Analysis of the transgenic tobacco seeds showed for the first time that the promoters of the oat globulin and avenin genes are able to regulate the expression of the GUS sequence in an endosperm-specific and developmentally controlled manner in a dicot plant, as in oat seeds with the original seed storage protein gene. One of the globulin promoters was shown to be probably inactive, whereas two other promoters appear to direct strong expression in the seeds of transgenic tobacco plants. A deletion analysis on one of the functional promoters demonstrated that a portion of the promoter upstream of nucleotide -259 (relative to the start of transcription as determined in oat) was required for expression. Sequence analysis of the globulin promoters showed the lack of conserved elements which are found in other storage protein gene promoters and believed to play an important role in the regulation of seed storage protein genes. It was nonetheless demonstrated in this study that the absence of such elements did not prevent correct functionality of the oat globulin promoters in transgenic tobacco seeds. The avenin promoter, when fused to the GUS sequence, also showed strong expression in the endosperm of transgenic tobacco seeds. Sequence analysis of the upstream region of the avenin gene confirmed the presence of a highly conserved 'prolamin box'; the possible role of this element was therefore further demonstrated in this work. This work also showed, for the first time, a difference in the choice of transcriptional start sites of two monocot (oat) seed storage protein genes after transfer into a dicot species (tobacco).
URL: http://hdl.handle.net/10393/6803 ; http://dx.doi.org/10.20381/ruor-11456
Collection: Thèses, 1910 - 2010 // Theses, 1910 - 2010
|
2022-10-01 21:31:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40603309869766235, "perplexity": 8522.672147891906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00433.warc.gz"}
|
https://stats.stackexchange.com/questions/linked/163957?sort=newest
|
268 views
### The rationale behind the “fail to reject the null” jargon in hypothesis testing?
In hypothesis testing with a Null and the Alternative hypothesis (assuming that these two cases are mutually inclusive of all the possible "truths"), we usually base our hypothesis selection criterion ...
89 views
### How can we reject null hypothesis based on p-values? [closed]
In textbook statistical tests, we usually calculate the probability of observing the data we observed given that the null hypothesis is true, i.e. $P[D|H_0]$. If this probability is small (e.g. \$<0....
149 views
### What fraction to use in the formulation of a hypothesis test ? - Population fraction versus sample fraction
Imagine that I am studying racial discrimination. In particular, whether a white candidate (for a job) is chosen more often than an equally qualified minority candidate. I ask people to choose ...
212 views
### what “rejecting a null hypothesis” actually means?
I have a sample dateset consists two variables as "Score"(independent continuous) consists score of students in maths and school(dependent categorical)having 3 categories as "S1","S2","S3". I'd ...
689 views
### When a one-tailed test passes but a two-tailed test does not
(Sorry if this is obvious or is a duplicate. I couldn't find one.) Suppose two researchers are studying whether average height of some population has changed significantly. Researcher 1 hypothesizes ...
16k views
### If we fail to reject the null hypothesis in a large study, isn't it evidence for the null?
A basic limitation of null hypothesis significance testing is that it does not allow a researcher to gather evidence in favor of the null (Source) I see this claim repeated in multiple places, but I ...
2k views
### Using p-value to compute the probability of hypothesis being true; what else is needed?
Question: One common misunderstanding of p-values is that they represent the probability of the null hypothesis being true. I know that's not correct and I know that p-values only represent the ...
232 views
### Null hypothesis for the one- and two-tailed test
From the Online Stat Book, Chapter 11.5: The null hypothesis for the two-tailed test is π = 0.5. By contrast, the null hypothesis for the one-tailed test is π ≤ 0.5. Why is that so? The null ...
237 views
### Is this null hypothesis wrong?
The Wikipedia article Null hypothesis says in its opening paragraph: In inferential statistics, the term "null hypothesis" is a general statement or default position that there is no relationship ...
37 views
### Interpreting hypothesis testing result (assuming that the null hypothesis is true) [duplicate]
I have a doubt on how to interpret a result of a hypothesis test. For example, a scenario where I have an existing configuration and also a new configuration. I am trying to check if with the new ...
939 views
### Choice of null and alternative hypothesis
A firm producing tobacco cigarettes claims that it has discovered a new technique for curing tobacco leaves, that results in an average nicotine content of a cigarette of less than 1.5 mg. To test ...
53 views
### Power of a test in testing of hypothesis
Power in testing of hypothesis is defined as the probability of making the correct decision. Then why do text books describe power=1-P(type 2 error) and not 1-P(type 1 error)
317 views
### Why do we trust the p-value when fitting a regression on a single sample?
I have code below that builds a linear model for a set of data: ...
|
2020-02-28 19:57:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9120270609855652, "perplexity": 1112.1166587559867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147628.27/warc/CC-MAIN-20200228170007-20200228200007-00327.warc.gz"}
|
https://math.stackexchange.com/questions/3168415/show-that-a-system-of-linear-equations-has-a-unique-solution
|
# Show that a system of linear equations has a unique solution
Suppose that I have the following system of $$K+1$$ linear equations with $$K\geq 3$$ $$\begin{cases} \lambda_j\times \lambda_h'=\lambda'_j\times \lambda_h & \text{ for K different pairs (j,h) taken from A}\\ \lambda_1+...+\lambda_K=1 \end{cases}$$
where
• the unknowns are $$(\lambda_1,...,\lambda_K)$$
• $$(\lambda'_1,...,\lambda_K')$$ are known parameters such that $$\lambda'_1+...+\lambda'_K=1$$
• $$\lambda_1>0,...,\lambda_K>0$$ and $$\lambda'_1>0,...,\lambda'_K>0$$
• $$A\equiv \{(1,2),(1,3),...,(1,K),(2,K),(3,K),...,(K-1,K)\}$$ with cardinality $$2K-3$$.
Question: could you help me to show that this system has a unique solution that is $$\lambda_1=\lambda'_1,...,\lambda_K=\lambda'_K$$
For example, when $$K=5$$, we have that $$A\equiv \{(1,2),(1,3),(1,4),(1,5), (2,5),(3,5),(4,5)\}$$ and a specification of the system could be $$\begin{cases} \lambda_1\times \lambda_5'=\lambda'_1\times \lambda_5\\ \lambda_2\times \lambda_5'=\lambda'_2\times \lambda_5\\ \lambda_1\times \lambda_4'=\lambda'_1\times \lambda_4\\ \lambda_1\times \lambda_3'=\lambda'_1\times \lambda_3\\ \lambda_3\times \lambda_5'=\lambda'_3\times \lambda_5\\ \lambda_1+\lambda_2+\lambda_3+\lambda_4+\lambda_5=1 \end{cases}$$
which implies $$\begin{cases} \lambda_2=\lambda_1\times \frac{\lambda_2'}{\lambda_1'}\\ \lambda_3=\lambda_1\times \frac{\lambda_3'}{\lambda_1'}\\ \lambda_4=\lambda_1\times \frac{\lambda_4'}{\lambda_1'}\\ \lambda_5=\lambda_1\times \frac{\lambda_5'}{\lambda_1'}\\ \end{cases}$$ Then, replacing this in $$\lambda_1+\lambda_2+\lambda_3+\lambda_4+\lambda_5=1$$ we get $$\lambda_1\times \Big(\overbrace{1+\frac{\lambda_2'}{\lambda_1'}+ \frac{\lambda_3'}{\lambda_1'}+ \frac{\lambda_4'}{\lambda_1'}+ \frac{\lambda_5'}{\lambda_1'}}^{=1/\lambda_1'}\Big)=1$$ which implies $$\lambda_1=\lambda_1'$$ and hence $$\lambda_2=\lambda_2',\lambda_3=\lambda_3',\lambda_4=\lambda_4',\lambda_5=\lambda_5'$$.
I am unable to generalise this procedure to any $$K$$ elements from the set $$A$$. Could you help?
We can rewrite $$\lambda_j\times \lambda_h'=\lambda'_j\times \lambda_h$$ as $$\frac{\lambda_j}{\lambda_j'} = \frac{\lambda_h}{\lambda_h'}$$. If we can show from these $$K$$ equations that $$\frac{\lambda_1}{\lambda_1'} = \frac{\lambda_2}{\lambda_2'} = \dots = \frac{\lambda_K}{\lambda_K'}$$ then there is a constant $$C$$ such that $$\lambda_i = C \lambda_i'$$ for all $$i$$; from knowing that $$\lambda_1 + \dots + \lambda_K = \lambda_1'+ \dots + \lambda_K'= 1,$$ we deduce that $$C=1$$ and therefore $$\lambda_i = \lambda_i'$$ for all $$i$$.
However, we cannot necessarily conclude that all the ratios $$\frac{\lambda_j}{\lambda_j'}$$ are equal. This depends on the $$K$$ specific equations we chose. Let $$G$$ be the graph with vertex set $$\{1,\dots,K\}$$ and an edge $$hj$$ whenever we choose the pair $$(h,j)$$ to form an equation. The requirement to have a unique solution is that $$G$$ must be connected. If so, for any $$a,b \in \{1,\dots,K\}$$ there is a path from $$a$$ to $$b$$ in $$G$$, and we get $$\frac{\lambda_a}{\lambda_a'} = \dots = \frac{\lambda_b}{\lambda_b'}$$ by transitivity along that path.
But here is an example (for $$K=5$$) without a unique solution. Choose the $$5$$ equations \begin{align} \lambda_1 \lambda_2' &= \lambda_1'\lambda_2 \\ \lambda_1 \lambda_3' &= \lambda_1'\lambda_3 \\ \lambda_1 \lambda_5' &= \lambda_1'\lambda_5 \\ \lambda_2 \lambda_5' &= \lambda_2'\lambda_5 \\ \lambda_3 \lambda_5' &= \lambda_3'\lambda_5 \ \end{align} Together, these equations are equivalent to $$\frac{\lambda_1}{\lambda_1'} = \frac{\lambda_2}{\lambda_2'} = \frac{\lambda_3}{\lambda_3'} = \frac{\lambda_5}{\lambda_5'}$$, but they leave out $$\lambda_4$$ entirely. So all $$5$$-tuples $$\lambda$$ with $$(\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5) = (A \lambda_1', A\lambda_2', A\lambda_3', B\lambda_4', A\lambda_5')$$ satisfy these $$5$$ equations, and if $$A(\lambda_1'+\lambda_2'+\lambda_3'+\lambda_5') + B \lambda_4'= 1$$, then $$\lambda_1 + \dots + \lambda_5 = 1$$ also holds. For any $$0 < A < \frac{1}{\lambda_1'+\lambda_2'+\lambda_3'+\lambda_5'}$$, we can set $$B = \frac{1 - A(\lambda_1'+\lambda_2'+\lambda_3'+\lambda_5')}{\lambda_4'}$$ and get a valid solution this way.
If, on the other hand, every index $$i$$ appears in at least one of the chosen pairs, uniqueness does follow:
1. For every $$i = 2,\dots,K-1$$, either $$\lambda_1 \lambda_i' = \lambda_1'\lambda_i$$ or $$\lambda_i\lambda_K' = \lambda_i'\lambda_K$$ is an equation, forcing $$\frac{\lambda_i}{\lambda_i'}$$ to be equal to either $$\frac{\lambda_1}{\lambda_1'}$$ or $$\frac{\lambda_K}{\lambda_K'}$$.
2. To get $$K$$ equations, we need to either include both such equations for some $$i$$, or else include the equation $$\lambda_1 \lambda_K' = \lambda_1'\lambda_K$$. In either case, we can conclude that $$\frac{\lambda_1}{\lambda_1'} = \frac{\lambda_K}{\lambda_K'}$$. Therefore all $$\frac{\lambda_i}{\lambda_i'}$$ are equal.
|
2019-08-19 20:54:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 68, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999109506607056, "perplexity": 97.81311280843182}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314959.58/warc/CC-MAIN-20190819201207-20190819223207-00082.warc.gz"}
|
https://socratic.org/questions/given-the-first-term-and-the-common-dimerence-er-an-arithmetic-sequence-how-do-y-2
|
# Given the first term and the common difference of an arithmetic sequence, how do you find the 52nd term and the explicit formula: a_1=12, d=-20?
Dec 7, 2017
${a}_{52} = - 1008$
#### Explanation:
We can write the nth term of the sequence:
${a}_{n} = {a}_{1} + d \cdot \left(n - 1\right)$
So, we have ${a}_{n} = 12 - 20 \cdot \left(n - 1\right)$.
To find the 52nd term we now substitute 52 for n:
${a}_{52} = 12 - 20 \cdot \left(52 - 1\right) =$
$= 12 - 20 \left(51\right) = 12 - 1020 = - 1008$
|
2019-11-17 03:02:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9819767475128174, "perplexity": 1675.5948093957848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668782.15/warc/CC-MAIN-20191117014405-20191117042405-00426.warc.gz"}
|
http://stackoverflow.com/questions/9572314/uploading-files-with-sftp
|
# Uploading files with SFTP
I have successfully uploaded files over ftp, but I now need to do via SFTP. I can successfully connect to the remote server, create a file and write to it, but I am unable to upload an existing file from my local server to the remote server. Is ftp_put not firing with an sftp connection?
My code used to write a file:

//Send file via sftp to server
$strServer = "*****";
$strServerPort = "****";
$strServerUsername = "*****";
$strServerPassword = "*****";
$csv_filename = "Test_File.csv";

//connect to server
$resConnection = ssh2_connect($strServer, $strServerPort);

if(ssh2_auth_password($resConnection, $strServerUsername, $strServerPassword)){
    //Initialize SFTP subsystem
    echo "connected";
    $resSFTP = ssh2_sftp($resConnection);
    $resFile = fopen("ssh2.sftp://{$resSFTP}/".$csv_filename, 'w');
    fwrite($resFile, "Testing");
    fclose($resFile);
}else{
    echo "Unable to authenticate on server";
}
Has anyone had any success in grabbing a local file and uploading via a method such as above with sftp? An example would be greatly appreciated.
Thanks
## 3 Answers
With the method above (involving sftp) you can use stream_copy_to_stream:
$resFile = fopen("ssh2.sftp://{$resSFTP}/".$csv_filename, 'w');
$srcFile = fopen("/home/myusername/".$csv_filename, 'r');
$writtenBytes = stream_copy_to_stream($srcFile, $resFile);
fclose($resFile);
fclose($srcFile);
You can also try using ssh2_scp_send
That worked perfectly, thank you. I shall bookmark for future projects! – Marc Mar 5 '12 at 20:41
Personally, I prefer avoiding the PECL SSH2 extension. My preferred approach involves phpseclib, a pure PHP SFTP implementation. eg.
<?php
include('Net/SFTP.php');
$sftp = new Net_SFTP('www.domain.tld');
if (!$sftp->login('username', 'password')) {
    exit('Login Failed');
}
$sftp->put('remote.ext', 'local.ext', NET_SFTP_LOCAL_FILE);
?>

One of the big things I like about phpseclib over the PECL extension is that it's portable. Maybe the PECL extension works on one version of Linux but not another. And on shared hosts it almost never works, because it's hardly ever installed. phpseclib is also, surprisingly, faster. And if you need confirmation that the file uploaded, you can use phpseclib's built-in logging as proof.

Sharing further inputs: ssh2_scp_send was not copying properly (the byte counts differed) when copying a file from Linux (64-bit) to Windows (32-bit), whereas the sftp routine worked perfectly. When using Windows with sftp, a path such as C:\to\path needs to be written as ssh2.sftp://{$resSFTP}/cygdrive/c/to/path if Cygwin is used for SSH on the Windows box.
|
2014-03-08 06:25:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5028038620948792, "perplexity": 9729.202587236095}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999653644/warc/CC-MAIN-20140305060733-00067-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://zelaron.com/forum/showpost.php?s=ca6546c0bb0f72800d2e6b4a9d4e61a2&p=708769
|
Posted 2018-12-06, 10:49 AM in reply to Chruser's post starting "Well, to quote some..."
Chruser said:
For the example above, $D$ seems to be the operation "differentiate, divide by $x$, integrate" (with some minor technical tweaks), but it's probably more complicated than that in general...
Actually, this seems to be it (modulo some minor technical stuff). My previous post was getting a bit lengthy, so I'll post some calculations in here instead.
We have that
$f_{g(n)}(x) = -\sum_{n=1}^\infty\frac{x^{g(n)}}{g(n)},$
so
$\frac{f'_{g(n)}(x)}{x} = -\sum_{n=1}^\infty x^{g(n)-2}.$
Consequently, by integrating both sides of the equation above and dropping the constant of integration, and by assuming that the order of integration and summation is interchangeable, we see that
$\int\frac{f'_{g(n)}(x)}{x}\,\mathrm{d}x = -\sum_{n=1}^\infty \int x^{g(n)-2}\,\mathrm{d}x = -\sum_{n=1}^\infty\frac{x^{g(n)-1}}{g(n)-1} = f_{g(n)-1}(x).$
In other words, a differential operator $D$ (in the fractional calculus sense, where integration is considered differentiation of order -1) such that $D(f_{g(n)}(x)) = f_{g(n)-1}(x)$ is
$D = \int x^{-1}\frac{\mathrm{d}}{\mathrm{d}x}.$
I can't say that I've seen it previously, but I like its symmetry...
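As a quick numerical sanity check (mine, not from the original post), sympy confirms the identity for a truncated series, with the sample choice $g(n) = 2n$:

import sympy as sp

x = sp.symbols('x')
g = lambda n: 2 * n  # sample exponent sequence; any strictly increasing g works

def f(shift, terms=6):
    # truncated f_{g(n)+shift}(x) = -sum_n x^(g(n)+shift) / (g(n)+shift)
    return -sum(x**(g(n) + shift) / (g(n) + shift) for n in range(1, terms))

# D = "differentiate, divide by x, integrate" applied to f_{g(n)}
Df = sp.integrate(sp.diff(f(0), x) / x, x)
print(sp.simplify(Df - f(-1)))  # prints 0, i.e. D(f_{g(n)}) = f_{g(n)-1}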
"Stephen Wolfram is the creator of Mathematica and is widely regarded as the most important innovator in scientific and technical computing today." - Stephen Wolfram
Last edited by Chruser; 2018-12-10 at 12:32 PM.
|
2020-01-17 16:42:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 7, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8514523506164551, "perplexity": 989.256841378373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250589861.0/warc/CC-MAIN-20200117152059-20200117180059-00121.warc.gz"}
|
http://arcana.wikidot.com/black-hole
|
Black hole
Basic Information
A black hole is an astronomical body whose gravitational pull is so strong that even light will be permanently drawn in if it comes within a certain distance. The escape velocity of the black hole is greater than the speed of light. The light (or any object traveling at or very close to the speed of light) either enters an orbit around the black hole, or gets "sucked in", depending on the direction the light was traveling relative to the black hole.
Any matter that crosses this orbital threshold, known as the event horizon, will be torn apart into elementary particles and irrevocably lost. Since light cannot escape, black holes cannot be seen directly but must instead be inferred from the effect they have on their surroundings. General relativity predicts that all of the mass of a black hole is concentrated in a single mathematical point, called a singularity.
The concept of a star so massive that its own light could not escape was first proposed by John Michell in 1783, but at the time it was such an outlandish concept that the scientific community failed to embrace it. In 1916, physicist Karl Schwarzschild discovered that Albert Einstein's Theory of Relativity had solutions that allowed for light to get stuck in an orbit or collapse into a "hole" in spacetime. Over the decades that have since passed, the scientific consensus has gradually shifted to accepting that Black Holes almost certainly exist within our Universe.
A black hole is created if a mass is compressed to within its Schwarzschild Radius $\left(r_s=\frac{2Gm}{c^2}\right)$, which will also determine its event horizon. For the Earth, that would mean compressing it to the size of a peanut. However, it is predicted that a sufficiently massive star will go through a gravitational collapse to within its Schwarzschild radius after it burns out. Michell's math suggested it would take a star 500 times as massive as our sun, but more recent work suggests it's a possible end-stage for a star just 10 to 50 times the mass of our sun. Point being, most black holes are actually going to be much smaller than you might imagine. After all, they are collapsed stars. What's special about them isn't the amount of mass they have, it's that this mass is compacted into a very tiny area. If a star the size of our sun managed to collapse to the point of being a black hole, its event horizon would only be about 6 km in diameter.
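As a rough back-of-the-envelope check of that formula (a quick script added here; the constants are standard values):

# Schwarzschild radius r_s = 2Gm/c^2 for the Sun
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
m_sun = 1.989e30   # mass of the Sun, kg
r_s = 2 * G * m_sun / c**2
print(round(r_s / 1000, 1))      # ~3.0 km radius
print(round(2 * r_s / 1000, 1))  # ~5.9 km event horizon diameter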
Black Holes cannot currently be directly observed, since they swallow up the light and radiation we would use to see them. It would be possible to detect distortions where light is bent by some unseen object of great mass. Determining whether such a distortion was from a genuine black hole, as opposed to distortion caused by a Massive Compact Halo Object or other hypothetical form of Dark Matter would probably be difficult and dangerous. As such, a Black Hole might serve as a navigational hazard in a role-playing game.
Flying Into A Black Hole
In most settings, we can assume there will be some way of detecting and avoiding a black hole. Established space lanes and commonly traveled routes will avoid black holes and suspected black holes. The ships' computer might analyze visual data from the sensors, compare them to star charts, and compute likely gravitational distortions in the vicinity. With superscience or applied phlebotinum you might be able to build a sensor that accurately detects the warping of spacetime that occurs near a black hole.
So, black holes should really only be a danger to vessels exploring remote areas of space, or to ships that for whatever reason have lost their navigational sensors, computers, or star charts. For the sake of drama, though, we'll assume you've somehow bumbled upon a black hole without warning. It's not going to look like a big area of darkness as you might expect, at least not until you get very close. Instead, light from objects far beyond it will be bent, so you'll actually see behind the black hole. Objects on the other side of it will be distorted, but if they're just distant stars the distortions might not be immediately obvious. At near ranges to the event horizon, you'd be able to notice the big black spot, but the relativistic speeds used by most fictional space propulsion systems may well mean you have very little time to react.
At the Event Horizon, light is in orbit. As you cross that horizon, if you look perpendicular to your motion, you'll see light that has circled around. If the Schwarzschild Radius is small enough, you may actually see yourself in the distance, or even a chain of yourselves, as light runs circuits around it, similar to what happens on a hypersphere. Of course, if the black hole is large, the diameter of the event horizon may be so long that you and your spacecraft are tiny specks. One dramatic element the GM could narrate in would be the ship's automated proximity alarm going off, as the computer suddenly detects several other ships. This phenomenon would alert you to the fact that you'd reached the Event Horizon, but it'd be too late to escape unless you have an FTL Stardrive. Unless you can travel Faster Than Light, the best you could do at this point is establish an orbit of the black hole. Again, the likely speed your craft was traveling at, and the relatively narrow band of the event horizon, means you're not likely to be in the orbital plane for long.
As your orbit decays and you fall into the black hole, things start getting distorted. This is going to be a nightmare, but at least you won't be staring at multiples of yourself any more. You'll be stretched in the direction of the black hole, and squished in the directions perpendicular to that. The parts of you closer to the singularity will feel much greater gravity, and start falling inward faster than the parts of you that are further from it. This stretching is called "spaghettification", but it'll actually look more like what happens when you stretch a broken rubber band. As you get longer, your midsection gets thinner.
Objects closer to, or further from, the singularity would get dimmer, and the light coming from those directions would have a reddish tone. This is the result of red shift, the phenomenon of the light taking longer to reach you. Looking perpendicular to your velocity, everything would be brighter, and have a slight bluish tone. This is the result of blue shift, the phenomenon of light taking less time to reach you. Your view would become distorted to include more of the plane perpendicular to the radius of the black hole, and your binocular vision would have serious trouble interpreting the distorted space. It would appear that the universe had been flattened into a plane perpendicular to your motion. These distortions would eventually become so strong that you would die, and then be crushed into the singularity. You'd be ripped apart on one axis, and crushed on the others.
Time would dilate as you accelerate into the black hole, but you'd be unable to notice. If your spacecraft were very large, you might be able to notice things ahead of you slowing down and stretching. It probably depends on the size of your ship, the size of the black hole, and the speed of your approach.
An outside observer watching your approach to the Schwarzschild Radius would see a very interesting display, though. First, you'd start to fade - this is the redshift effect again, as light coming from you takes longer to travel the distance to them. You'd appear to stretch. Your travel toward the center of the black hole would speed up, but the dilation of time would mess up their perception of it, and your other motions would seem to slow down. Eventually, as you cross the event horizon, you'd vanish from sight completely, as the light bouncing off of you would get stuck on that side of the Event Horizon.
Tropes over science
It's extremely rare that TV or movies depict a black hole at all realistically.
Gravity Sucks
Space Does Not Work That Way
Unrealistic Black Hole
You Fail Astronomy Forever
Game and Story Use
• A black hole may actually be one side of a wormhole, leading to any other place in space and/or time, or even to a different dimension or parallel universe. Of course, there's the matter of creating a wide enough opening and of somehow passing through without being utterly destroyed in the process.
• Also, this is probably a one-way trip, and with no idea what's on the other side.
• Your best odds of surviving entry into a black-hole wormhole are if it's a rotating black hole. See that entry for more details.
• Related theories:
• White Hole a mirror of the black hole which can theoretically exist paired up with a black hole.
• Fecund Universes theory - suggests every time a black hole is formed, it creates a new universe within it.
• The sheer mass of a typical black hole makes its gravitational pull so strong that spaceships cannot safely go anywhere near it. Thus, a well-placed black hole can be an important strategic factor.
• The ability to create black holes can make for an impressive weapon. These can run the range from cannons firing micro-black holes that dissipate soon after taking a chunk out of enemy ships to planet- or system destroying weapons.
• If used carelessly or ruthlessly, such weapons can cause collateral damage on an unimaginable scale.
• A doomsday cult may try to cause the end of the world by creating an artificial black hole.
• A spacefaring civilization may use a black hole as a dump site.
• If it's actually one end of a wormhole, this may generate some unwanted attention…
|
2017-10-17 00:14:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4983029365539551, "perplexity": 626.7959766597419}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820487.5/warc/CC-MAIN-20171016233304-20171017013304-00588.warc.gz"}
|
https://homework.cpm.org/category/CC/textbook/cca2/chapter/8/lesson/8.2.1/problem/8-71
|
8-71.
Explain why $i^{3} = –i$. What does $i^{4}$ equal?
$i ^{3} = i ^{2} · i$. What does $i ^{2}$ equal?
How can you rewrite $i ^{4}$ as a product of $i ^{2}$ and/or $i$ factors?
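Putting the hints together (a worked version added here for reference): $i^{2} = -1$, so $i^{3} = i^{2} \cdot i = (-1) \cdot i = -i$, and $i^{4} = i^{2} \cdot i^{2} = (-1)(-1) = 1$.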
|
2020-12-01 18:18:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 7, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5336148738861084, "perplexity": 2376.678863377428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141681209.60/warc/CC-MAIN-20201201170219-20201201200219-00307.warc.gz"}
|
https://k08.chatzi.org/slides/multi-way-search-trees/
|
# Multi-Way Search Trees
K08 Data Structures and Programming Techniques
Κώστας Χατζηκοκολάκης
## Motivation
• We keep the ordering idea of BSTs
• Fast search, by excluding whole subtrees
• And add more than two children for each node
• Gives more flexibility in restructuring the tree
• And new ways to keep it balanced
## Multi-way search trees
• $d$-node: a node with $d$ children
• Each internal $d$-node stores $d-1$ ordered values $k_1 < \ldots < k_{d-1}$
• No duplicate values in the whole tree
• All values in a subtree lie in-between the corresponding node values
• For all values $l$ in the $i$-th subtree: $k_{i-1} < l < k_i$
• Convention: $k_0 = -\infty, k_d = +\infty$
• $m$-way search tree: all nodes have at most $m$ children
• A BST is a 2-way search tree
## Example multi-way search tree
$m=3$
## Searching in a multi-way search tree
• Simple adaptation of the algorithm for BSTs
• Start from the root, traverse towards the leaves
• In each node, there is a single subtree that can possibly contain a value $l$
• The subtree $i$ such that $k_{i-1} < l < k_i$
• Continue in that subtree
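A minimal sketch of this search in Python (names and node layout are illustrative, not from the course code):

class Node:
    def __init__(self, values, children=None):
        self.values = values            # sorted values k_1 < ... < k_{d-1}
        self.children = children or []  # d subtrees, empty for a leaf

def search(node, l):
    while node is not None:
        i = 0                           # find the subtree i with k_{i-1} < l < k_i
        while i < len(node.values) and l > node.values[i]:
            i += 1
        if i < len(node.values) and node.values[i] == l:
            return node                 # found
        node = node.children[i] if node.children else None
    return None                         # unsuccessful search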
## Search for value 12
Unsuccessful search
## Insertion in a multi-way search tree
• Again, simple adaptation of BSTs
• But: we don't always need to create a new node
• We can insert in an existing one if there is space
• Start with a search for the value $l$ we want to insert
• If found, stop (no duplicates)
• If the leaf has space, insert $l$ into it; if full, create an $i$-th child, such that $k_{i-1} < l < k_i$
## Insert value 28
$m=3$
## Deletion from a multi-way search tree
Left as an exercise.
## Complexity of operations
• We need to traverse the tree from the root to a leaf
• The time spent at each node is constant
• Eg. find $i$ such that $k_{i-1} < l < k_i$
• Assuming $m$ is fixed!
• So as usual all complexities are $O(h)$
• $O(n)$ in the worst-case
## Balanced multi-way search trees
• Similarly to BSTs we need to keep the tree balanced
• So that $h = O(\log n)$
• AVL trees were a kind of balanced BST
• We will study two kinds of balanced multi-way search trees:
• 2-3 trees
• 2-3-4 trees (also known as (2,4) trees)
## 2-3 trees
• A 2-3 tree is a 3-way search tree which has the following properties
• Size property
• Each node contains 1 or 2 values
(so each internal node contains 2 or 3 children)
• Depth property
• All leaves have the same depth (lie on the same level)
## Height of 2-3 trees
• All nodes at all levels except the last one are internal
• And each internal node has at least 2 children
• So at level $i$ we have at least $2^i$ nodes
• Hence $n \ge 2^h$, in other words $h = O(\log n)$
• So we can search for an element in time $O(\log n)$
• Using the standard algorithm for $m$-way trees
## Insertion in 2-3-trees
• We can start by following the generic algorithm for $m$-way trees
• Search for the value $l$ we want to insert
• If found, stop (no duplicates)
## Insertion in 2-3-trees
• But what if there is no space at the leaf (overflow)?
• The standard algorithm will insert a child at the leaf
• But this violates the depth property!
• The new leaf is not at the same level
• Different strategy
• split the overflowed node into two nodes
• pass the middle value to the parent (separator of the two nodes)
• The middle value might overflow the parent
• Same procedure: split and send the middle value up
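A rough sketch of the split step, reusing the Node class from the search sketch above (illustrative only; a full implementation must also re-attach the separator in the parent):

def split(node):
    a, b, c = node.values                # overflowed node temporarily holds 3 values
    left = Node([a], node.children[:2])  # keeps the two leftmost subtrees
    right = Node([c], node.children[2:]) # keeps the two rightmost subtrees
    return b, left, right                # b goes up to the parent as the separator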
## Example: insert M
M overflows this node.
## Example: insert R
R is inserted in the node with Q where there is space.
## Insertion in 2-3-trees
• The root might also overflow
• Same procedure
• Split it
• The middle value moves up, creating a new root
• This is the only operation that increases the tree's height
• It increases the depth of all nodes simultaneously
• 2-3-trees grow at the root, not at the leaves!
## Example: insert S
S overflows this node
## Complexity of insertion
• We traverse the tree
• From the root to a leaf when searching
• From the leaf back to the root while splitting
• Each split takes constant time
• We do at most $h+1$ of them
• So in total $O(h) = O(\log n)$ steps
• Recall, the tree is balanced
## (2,4) trees
• A (2,4) tree (or 2-3-4 tree) is a 4-way search tree with 2 extra properties
• Size property
• Each node contains between 1 and 3 values
(so each internal node contains between 2 and 4 children)
• Depth property
• All leaves have the same depth (lie on the same level)
• Such trees are balanced
• $h = O(\log n)$
• Proof: exercise
## Insertion in (2,4) trees
• Same as for 2-3-trees
• Search for the value
• Insert at a leaf
• In case of an overflow (5-node)
• Split it into a 3-node and a 2-node
• Move the separator value $k_3$ to the parent
## Example
Inserted 11, 13 and 14.
## Complexity
• Same as for 2-3-trees
• At most $h$ splits
• Each split is constant time
• $O(\log n)$
• Because the tree is balanced
## Removal in (2,4) trees
• To remove a value $k_i$ from an internal node
• Replace with its predecessor (or its successor)
• Right-most value in the $i$-th subtree
• Similar to the BST case of nodes with two children
• To remove a value from a leaf
• We simply remove it
• But it might violate the size property (underflow)
## Fixing underflows
Two strategies for fixing an underflow at $\nu$
• Is there an immediate sibling $w$ with a “spare” value? (2 or 3 values)
• If so, we do a transfer operation
• Move a value of $w$ to its parent $u$
• Move a value of the parent $u$ to $\nu$
• If not, we do a fusion operation
• Merge $\nu$ and $w$, creating a new node $\nu'$
• Move a value from the parent $u$ to $\nu'$
• This might underflow the parent, continue the same procedure there
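A rough sketch of the two repair operations in the same style as the earlier snippets (right-sibling case only; index bookkeeping is simplified):

def transfer(parent, i):
    nu, w = parent.children[i], parent.children[i + 1]
    nu.values.append(parent.values[i])         # separator comes down into nu
    parent.values[i] = w.values.pop(0)         # sibling's spare value moves up
    if w.children:
        nu.children.append(w.children.pop(0))  # keep subtrees aligned

def fusion(parent, i):
    nu, w = parent.children[i], parent.children[i + 1]
    merged = Node(nu.values + [parent.values.pop(i)] + w.values,
                  nu.children + w.children)
    parent.children[i:i + 2] = [merged]        # parent may now underflow in turn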
|
2020-10-21 01:31:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5586715340614319, "perplexity": 3586.502025143769}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874637.23/warc/CC-MAIN-20201021010156-20201021040156-00679.warc.gz"}
|
http://math.stackexchange.com/questions/243667/maximalization-of-a-cubic-puzzle
|
# Maximalization of a cubic puzzle
What is the maximal volume of a post package of length $L$, width $W$ and height $H$, subject to the following restrictions:
• $L+W+H \leq 90$
• $L \leq 60$, $W \leq 60$, $H \leq 60$
Intuitively I would say $30^3$, but how do I find the solution mathematically?
Taking the partial derivatives: $V_L = WH$, $V_W = LH$, $V_H = LW$; equating them gives $WH = LH = LW$, and hence $L = W = H$.
And therefore the maximum volume is: $V=(90/3)^3=30^3$. However, here I have assumed that the maximum volume occurs for $L+W+H=90$,
How do I prove this?
I have also not incorporated the $L \leq 60$, $W \leq 60$, $H \leq 60$ restrictions.
Use the Arithmetic Mean Geometric Mean Inequality (AM-GM). If $x_1,x_2,\dots x_n$ are positive, then $$\frac{x_1+x_2+\cdots+x_n}{n}\ge (x_1x_2\cdots x_n)^{1/n},$$ with equality if and only if all the $x_i$ are equal.
For your problem, use the case $n=3$. Because $L+W+H \le 90$, by AM-GM we have $$(LWH)^{1/3}\le \frac{90}{3}=30,$$ with equality iff $L=W=H$. The given additional individual constraints on $L$, $W$, and $H$ make no difference, since they do not interfere with setting $L=W=H=30$.
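(As a quick check, not in the original answer: at $L=W=H=30$ the volume is $30^3 = 27000$, and by the inequality above no feasible box can exceed it, since $LWH \le \left(\frac{L+W+H}{3}\right)^3 \le 30^3$.)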
Thanks André Nicolas! It seems that the AM-GM states exactly what my intuition was telling me: "Thus the AM–GM inequality states that only the n-cube has the smallest sum of lengths of edges connected to each vertex amongst all n-dimensional boxes with the same volume." – Andy Nov 24 '12 at 18:16
|
2015-04-18 04:17:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9585049748420715, "perplexity": 163.4227816459215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246633512.41/warc/CC-MAIN-20150417045713-00156-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://mathoverflow.net/questions/92899/left-adjoint-to-the-forgetful-functor-from-finite-product-categories-to-symmetri?answertab=oldest
|
# Left adjoint to the forgetful functor from finite product categories to symmetric monoidal categories
I recall reading that the forgetful functor $FinProdCat \to SymMonCat$ from categories with finite products and product preserving functors to symmetric monoidal categories and tensor preserving functors has a left adjoint.
To make this precise one has to insert lax, weak or strict in several places -- I am interested in any combination of these (but most in a 2-adjunction between the categories with weakly product, resp. tensor, preserving functors).
Is something like this true at all? If so, can anyone give a true and precise statement and/or a reference? I wouldn't mind getting a concrete description of the left adjoint, but a confirmation of its existence would already be a treat.
Thanks!
(This is not a case of google laziness: I spent half a day looking for reference. I would imagine that the statement emerges after inserting the right things into long known results about enriched base change or 2-monads, but I wasn't able to find the right one)
EDIT: What would be nice would be an argument along these lines: Both symmetric monoidal categories and finite product categories are algebras for certain pseudomonads. Algebras for pseudomonads are finite copower preserving functors from Cat-enriched Lawvere theories, see Power's Enriched Lawvere Theories, Thm 3.4. There should be a map (sort of an inclusion, since we demand less structure for a symmetric monoidal category) from the Lawvere theory for symmetric monoidal categories to that for finite product categories and the forgetful functor should be precomposition with it. Now the left adjoint could be obtained by taking Cat-enriched left Kan extensions along this map.
One problem is that I only know that left Kan extensions of product preserving functors along product preserving functors are product preserving again, but I don't know the corresponding statement for copowers. This could either be true or for our special Lawvere theories it could be enough to ask for product preserving functors, then the above might have a chance to work.
By the way: I don't mean the right adjoint to the forgetful functor - for this see e.g. this article of Hyland and Power: Symmetric Monoidal Sketches dpmms.cam.ac.uk/~martin/Research/Oldpapers/hp00.pdf – Peter Arndt Apr 2 '12 at 14:23
@Peter: Thank you very much for pointing me to this article; in the last days I wondered if there is a theory of "generators and relations" for symm. mon. cat.; see also mathoverflow.net/questions/92897. Sorry for this offtopic comment. But even in this case I would first observe that every symmetric monoidal category defined by generators and relations should be mapped to the corresponding finite product category with the same generators and relations. For example, the permutation groupoid gets mapped to the discrete category $\mathbb{N}$; both are free on one generator. – Martin Brandenburg Apr 2 '12 at 14:34
If you add enough "niceness" conditions on a category, a fantastic way to improve a symmetric monoidal category to one where the tensor product is given by cartesian product is to consider the category of cocommutative coalgebras in your original category. But I think this construction is right-adjoint to forgetting from cartesian to symmetric monoidal. In order for a left adjoint to exist, it had better be true that any (2-)limit of finite product categories is also the limit of underlying symmetric monoidal categories. How confident are you in this? – Theo Johnson-Freyd Apr 2 '12 at 16:23
@Martin: Good thought! The presentations and relations should help me compute the left adjoint in my cases. For the permutation groupoid it should give the opposite of the category of finite sets, though. – Peter Arndt Apr 2 '12 at 17:52
@Theo: Yes, that is the - admittedly very neat - right adjoint. It seems to be true that the forgetful functor preserves limits, so I'm keeping my fingers crossed. – Peter Arndt Apr 2 '12 at 17:53
The 2-categories of finite-product categories and symmetric-monoidal categories are both 2-monadic over $\mathrm{Cat}$, in all possible senses. That is, there are strict 2-monads $P$ and $S$ on $\mathrm{Cat}$ such that $P$-algebras and strict, pseudo, lax, and colax $P$-morphisms coincide with finite-product categories and their morphisms, and likewise for $S$-algebras and symmetric-monoidal categories. Moreover, both $P$ and $S$ are finitary, and we have a 2-monad morphism $S\to P$ which induces your forgetful functor(s) on categories of algebras.
First consider the strict case: (strict) limits of both $P$-algebras and $S$-algebras (and strict morphisms) are created in $\mathrm{Cat}$, so the forgetful functor preserves strict limits. Thus, since the 2-categories of $P$- and $S$-algebras and strict morphisms are locally presentable, this functor has a (strict) left adjoint.
The pseudo case requires some more work. One could in theory mimic the above argument entirely in the world of 2-categories, but I don't know if the machinery has been set up for that yet. Alternatively, one can use pseudo-morphism classifiers to reduce the problem to the strict case. This is done in the classic Blackwell-Kelly-Power paper "Two-dimensional monad theory", Theorem 5.12. (They state the strict case as Theorem 3.9.)
I have no idea whether the lax or colax cases are true; offhand I would say it seems unlikely.
Thank you! (I wrote the edit to my question before seeing your answer - I will look at Blackwell-Kelly-Power now) – Peter Arndt Apr 2 '12 at 17:56
Every time I encounter 2-monad theory (and various special cases thereof) I don't understand how one can make any of these general constructions explicit. Even for me, and I like category theory a lot, it is still impossible to understand any paper in this area. I wonder why. – Martin Brandenburg Apr 2 '12 at 18:16
So let me pose this specific question: Even if there is some fancy theory which yields the existence of the left adjoint, how can we write it down? I mean something explicit which is not defined by generators and relations. For example, the right adjoint is explicit, it is just $C \mapsto \mathrm{CMon}(C)$. – Martin Brandenburg Apr 2 '12 at 18:19
@Martin: Unfortunately, frequently adjoint functors don't have very nice descriptions. Look at a proof of the adjoint functor theorem; in many cases that's as explicit of a "construction" as you're going to get. In the locally presentable case, the adjoint can usually be defined as a certain transfinite colimit; maybe you'd like that better? – Mike Shulman Apr 2 '12 at 19:29
Thm. 5.12 did it for me - thanks a lot. – Peter Arndt Apr 2 '12 at 22:43
|
2015-07-01 12:07:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9403356313705444, "perplexity": 375.41807076336636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094924.48/warc/CC-MAIN-20150627031814-00246-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://docs.sqream.com/en/latest/guides/operations/information_for_support.html
|
Gathering information for SQream support
SQream Support is ready to answer any questions, and help solve any issues with SQream DB.
Getting support and reporting bugs
When contacting SQream Support, we recommend reporting the following information:
• What is the problem encountered?
• What was the expected outcome?
• How can SQream reproduce the issue?
When possible, please attach as many of the following:
• Error messages or result outputs
• DDL and queries that reproduce the issue
• Log files
• Screen captures if relevant
How SQream debugs issues
Reproduce
If we are able to easily reproduce your issue in our testing lab, this greatly improves the speed at which we can fix it.
Reproducing an issue consists of understanding:
1. What was SQream DB doing at the time?
2. How is the SQream DB cluster configured?
3. How does the schema look?
4. What is the query or statement that exposed the problem?
5. Were there any external factors? (e.g. Network disconnection, hardware failure, etc.)
See the Collecting a reproducible example of a problematic statement section ahead for information about collecting a full reproducible example.
Logs
The logs produced by SQream DB contain a lot of information that may be useful for debugging.
Look for error messages in the log and the offending statements. SQream’s support staff are experienced in correlating logs to workloads, and finding possible problems.
See the Collecting logs and metadata database section ahead for information about collecting a set of logs that can be analyzed by SQream support.
Fix
Once we have a fix, this can be issued as a hotfix to an existing version, or as part of a bigger major release.
Your SQream account manager will keep you up-to-date about the status of the issue.
Collecting a reproducible example of a problematic statement
SQream DB contains an SQL utility that can help SQream support reproduce a problem with a query or statement.
This utility compiles and executes a statement, and collects the relevant data in a small database which can be used to recreate and investigate the issue.
SQL Syntax
SELECT EXPORT_REPRODUCIBLE_SAMPLE(output_path, query_stmt [, ... ])
;
output_path ::=
filepath
Parameters

Parameter            Description
output_path          Path for the output archive. The output file will be a tarball.
query_stmt [, ...]   Statements to analyze.
Example
SELECT EXPORT_REPRODUCIBLE_SAMPLE('/home/rhendricks', 'SELECT * FROM t', $$SELECT "Name", "Team" FROM nba$$);
Collecting logs and metadata database

SQream DB comes bundled with a data collection utility and an SQL utility intended for collecting logs and additional information that can help SQream support drill down into possible issues.
Examples
Write an archive to /home/rhendricks, containing log files:
SELECT REPORT_COLLECTION('/home/rhendricks', 'log')
;
Write an archive to /home/rhendricks, containing log files and metadata database:
SELECT REPORT_COLLECTION('/home/rhendricks', 'db_and_log')
;
Using the command line utility:
$ ./bin/report_collection /home/rhendricks/sqream_storage /home/rhendricks db_and_log
|
2021-01-25 23:14:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1718963235616684, "perplexity": 7870.3098735652175}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704792131.69/warc/CC-MAIN-20210125220722-20210126010722-00252.warc.gz"}
|
https://nemeth.aphtech.org/lesson6.3
|
Lesson 6.3: Brackets and Braces
Symbols
[ left bracket
⠈⠷
] right bracket
⠈⠾
[ left bold bracket
⠸⠈⠷
] right bold bracket
⠸⠈⠾
{ left brace
⠨⠷
} right brace
⠨⠾
∅ null set
⠸⠴
Brackets
In braille, brackets are variations of parentheses. The opening and closing brackets are formed by adding a dot four in the cell preceding: the basic opening symbol, dots one two three five six; and the basic closing symbol, dots two three four five six. Boldface brackets are represented by the boldface typeform indicator, dots four five six, preceding the bracket symbol, making it a three-cell configuration. Nemeth code brackets are always to be used, even if the brackets enclose literary expressions within a mathematical context.
• Opening, left, bracket, dot four dots one two three five six
• Closing, right, bracket, dot four dots two three four five six
• Boldface, left, bracket, dots four five six dot four dots one two three five six
• Boldface, right, bracket, dots four five six dot four dots two three four five six
• Use of Brackets
Brackets are used to group or enclose other signs of grouping to show a higher level of grouping. They also serve many of the same purposes as parentheses when they are used at this second level of grouping. In interval notation, they can be used to show that a range of values includes a certain value.
Braces
Braille braces are two-cell symbols that are modifications of the opening and closing parentheses. The braces are formed by preceding the configurations used for parentheses with dots four six. The opening, left, brace is comprised of dots four six dots one two three five six; the closing, right, brace, dots four six dots two three four five six.
Braces enclosing sets
Braces are often used to indicate set notation. Such a set may include numeric values, letters, or words. Frequently, a set is designated by a capital letter followed by an equals sign. Following the equals sign is a set of braces which enclose the elements of the set.
Facing Braces: The Empty Set
The empty set in math indicates that no elements can be found that match the conditions for the set. It is often shown with two braces not containing any elements or with a zero and a slash or vertical line through it. When braces are used to represent the empty set, a space is inserted between the braces to indicate that the set does not have anything in it. This is not the same as an omitted item. For example, "the set of all cats which have been formally trained to guide blind persons" equals the empty set. This is because, as of this writing, there have not been any cats who have been formally trained to guide people who are blind. The space between the braces shows that nothing fulfills the requirement.
Punctuation with Brackets and Braces
Brackets and braces are mathematical symbols, even when they are used to enclose literary material. They are to be punctuated as mathematical symbols. The punctuation indicator is used with all marks of punctuation other than the mathematical comma, hyphen, and dash.
Special Rules for Brackets
Do not use the numeric indicator immediately following the opening bracket indicator except when the brackets are used to show a matrix. See rules for matrices.
Punctuate as mathematical symbols.
Brackets follow the general rules for signs of grouping.
Special Rules for Braces
In braille, braces should appear in the same position as they do in print.
The following may not be used in direct contact with a sign of grouping, such as the brace: one-cell whole-word alphabet contractions, lower-cell whole word contractions, or any of the whole or part-word contractions, such as and, for, of, the, or with.
Use or non-use of the English letter indicator depends upon whether or not it would be required if the braces were not present. In lists of items separated by commas, it is not required.
The numeric indicator is not used with numerals in an enclosed list or when numerals are in contact with both signs of grouping.
Punctuate braces as mathematical symbols.
Example 1
$\left[7+\left(10÷2\right)\right]$
⠈⠷⠶⠬⠷⠂⠴⠨⠌⠆⠾⠈⠾
Example 2
Bold face brackets
$\mathbf{\left[}2.1\mathbf{\right]}=2$
⠸⠈⠷⠆⠨⠂⠸⠈⠾⠀⠨⠅⠀⠼⠆
Example 3
$\left[-5,\ 2\right)$
⠈⠷⠤⠢⠠⠀⠆⠾
Example 4
$\left[-5,\ 2\right]$
⠈⠷⠤⠢⠠⠀⠆⠈⠾
Example 5
$B=\left\{-1,\ 2,\ 2.4\right\}$
⠠⠃⠀⠨⠅⠀⠨⠷⠤⠂⠠⠀⠆⠠⠀⠆⠨⠲⠨⠾
Example 6
The vertical bar is a sign of comparison and should be brailled according to those rules.
$A=\left\{x|x=3y\right\}$
⠠⠁⠀⠨⠅⠀⠨⠷⠭⠳⠭⠀⠨⠅⠀⠼⠒⠽⠨⠾
Example 7
$\left\{5,\ -1,\ 2,\ r,\ p,\ B\right\}$
⠨⠷⠢⠠⠀⠤⠂⠠⠀⠆⠠⠀⠗⠠⠀⠏⠠⠀⠠⠃⠨⠾
Example 8
$\left\{0\right\}$
⠨⠷⠴⠨⠾
Example 9
$\left\{\ \right\}$ or $\varnothing$
⠨⠷⠀⠨⠾⠀⠕⠗⠀⠸⠴
|
2021-10-16 00:34:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6799682378768921, "perplexity": 2111.2007927635323}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00334.warc.gz"}
|
https://electronics.stackexchange.com/questions/238222/can-a-system-with-a-response-yx-ax-c-be-called-linear?noredirect=1
|
# Can a system with a response Y(x) = ax + c be called linear?
I made a DC-DC converter; its ideal transfer function is: $$V_0(D) = DV_i$$
I took measurements of its response and then made a linear regression, I got this result:
$$V_0(D) = 0.98DV_i -0.42$$
Quite similar to the expected value. Now, can I say that the system has a linear response?
Strictly speaking, the system response should be zero when the input is zero, but in this case, if D = 0:
$$V_0 = -0.42$$
What would be the correct way to refer at the response of this system?
• Right up until it clips, yes. – Ignacio Vazquez-Abrams Jun 5 '16 at 0:39
• Linear is exactly correct. However, it is not proportional, which would only occur if c is zero. That is, the output is not proportional to the input (if you double the input you do not double the output. Of course, you can say that changes in the output are proportional to changes in the input, but this is not (exactly) the same thing. – WhatRoughBeast Jun 5 '16 at 1:04
• You did a regression with $D$ as the independent variable or $V_i$? – The Photon Jun 5 '16 at 2:10
• Possibly related: affine transformation. – The Photon Jun 5 '16 at 2:12
• Strictly speaking that is not a linear relationship between D and $V_o$, it is an affine relationship (see previously commented link). However its very rare that the distinction matters in engineering and you're likely to hear people call it "linear" anyway. – The Photon Jun 5 '16 at 2:28
Strictly speaking that is not a linear relationship between $D$ and $V_o$, it is an affine relationship. However it's very rare that the distinction matters in engineering and you're likely to hear people call it "linear" anyway.
• Great, that was exactly what I was looking for. So could I say that Vo(D) = 0.98DVi - 0.42 is an affine model of the system? – Luis Ramon Ramirez Rodriguez Jun 5 '16 at 2:34
• Honestly I think it would be more common to just call it a linear model. – The Photon Jun 5 '16 at 2:41
• I would say that it is misleading to call it linear. When someone says linear I understand T(a + b) = T(a) + T(b) – user110971 Jun 5 '16 at 7:39
• I understand that it is affine, but I really have trouble understanding why it's not just linear, since linearity means y=mx+b. Could you write something about the distinction? – sweber Jun 5 '16 at 7:50
• @sweber, see user110971's comment. That's the definition of a linear system, and it doesn't apply to OP's system. – The Photon Jun 5 '16 at 14:57
Can a system with a response Y(x) = ax + c be called linear?
If two systems, A and B, are linear then for a cascade of the two systems, the order does not matter. That is, AB = BA.
For example, let system A be an ideal gain of 10 stage while system B is an ideal 1st order low-pass filter with unity DC gain.
Since both stages are linear, the cascade of the two systems is a low-pass filter with a DC gain of 10 regardless of whether B follows A or A follows B in the cascade.
Now, see that a system with gain and offset is not a linear system. For example, let system A be as before but system B is now unity gain with a constant offset of 1.
For the cascade AB, the output is the input scaled by 10 plus an offset of 1.
However, for the cascade BA, the output is the input scaled by 10 plus an offset of 10 and so system B is not a linear system.
Another definition of linearity is the following: if $y_1$ is the output of a system given input $x_1$ and $y_2$ is the output of the same system given input $x_2$, then given the input $x_3 = a_1x_1 + a_2x_2$, the output is $y_3 = a_1y_1 + a_2y_2$ if and only if the system is linear.
For the case of system B with unity gain and offset 1, we have
$$y_1 = x_1 + 1$$ $$y_2 = x_2 + 1$$ $$y_3 = a_1x_1 + a_2x_2 + 1 \ne a_1y_1 + a_2y_2 = a_1x_1 + a_2x_2 + a_1 + a_2$$
System B is therefore not a linear system, and so the answer to the quoted question is no.
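A concrete numeric check (added here; not part of the original answer): take $x_1 = 1$, $x_2 = 2$ and $a_1 = a_2 = 1$ for system B. Then $$y_1 = 2,\qquad y_2 = 3,\qquad y_3 = (1 + 2) + 1 = 4 \ne y_1 + y_2 = 5.$$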
• The equation doesn't satisfy the superposition theorem so isn't linear. There is a very widespread misuse of the word linear in engineering I think – Luis Ramon Ramirez Rodriguez Jun 5 '16 at 3:31
|
2019-06-27 07:14:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7028692364692688, "perplexity": 528.7668874228855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000894.72/warc/CC-MAIN-20190627055431-20190627081431-00223.warc.gz"}
|
http://openstudy.com/updates/507ee579e4b0599919843bff
|
## amorfide: is anyone able to do a quick lesson on integration by parts?
1. experimentX
let $$u$$ and $$v$$ be two functions of x, $\int u(x) \; v(x) \;dx = u(x) \int v(x) dx - \int {d\over dx}u(x) \left [ \int v(x) dx \right ] dx \\= v(x) \int u(x) dx - \int {d\over dx}v(x) \left [ \int u(x) dx \right ] dx$ Rest is practice!!
2. amorfide
[drawing] is that the integral of the derivative of v(x)? lol
3. experimentX
try a few examples [drawing]
4. experimentX
yeah ... but that is multiplied by another integral ... and dx appears at last.
5. experimentX
[drawing]
6. amorfide
so in words it would be (second x integral of first) - (integral of (derivative of second x integral of first))
7. experimentX
yep .. you can interchange first and second though. suit yourself according to the problem.
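A standard worked example of the formula above (added for reference; not part of the original thread): with $u(x) = x$ and $v(x) = e^x$, $\int x \, e^x \, dx = x \int e^x \, dx - \int {d\over dx}x \left[ \int e^x \, dx \right] dx = x e^x - \int e^x \, dx = (x - 1)e^x + C.$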
|
2014-07-31 21:55:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6043420433998108, "perplexity": 4975.163242500724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273676.54/warc/CC-MAIN-20140728011753-00000-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://sources.debian.org/src/texlive-bin/2018.20181218.49446-1/texk/web2c/luatexdir/tex/printing.w/
|
## File: printing.w
texlive-bin 2018.20181218.49446-1
% printing.w
%
% Copyright 2009-2013 Taco Hoekwater
%
% This file is part of LuaTeX.
%
% LuaTeX is free software; you can redistribute it and/or modify it under
% the terms of the GNU General Public License as published by the Free
% Software Foundation; either version 2 of the License, or (at your
% option) any later version.
%
% LuaTeX is distributed in the hope that it will be useful, but WITHOUT
% ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
% FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
% License for more details.
%
% You should have received a copy of the GNU General Public License along
% with LuaTeX; if not, see <http://www.gnu.org/licenses/>.

@ @c
#include "ptexlib.h"
#include "lua/luatex-api.h" /* for luatex_banner */

@ @c
#define wlog(A) fputc(A,log_file)
#define wterm(A) fputc(A,term_out)

int new_string_line = 0;
int escape_controls = 1;

@ Messages that are sent to a user's terminal and to the transcript-log file
are produced by several `|print|' procedures. These procedures will direct
their output to a variety of places, based on the setting of the global
variable |selector|, which has the following possible values:

\yskip
\hang |term_and_log|, the normal setting, prints on the terminal and on the
transcript file.

\hang |log_only|, prints only on the transcript file.

\hang |term_only|, prints only on the terminal.

\hang |no_print|, doesn't print at all. This is used only in rare cases
before the transcript file is open.

\hang |pseudo|, puts output into a cyclic buffer that is used by the
|show_context| routine; when we get to that routine we shall discuss the
reasoning behind this curious mode.

\hang |new_string|, appends the output to the current string in the string
pool.

\hang 0 to 15, prints on one of the sixteen files for \.{\\write} output.

\yskip
\noindent The symbolic names `|term_and_log|', etc., have been assigned
numeric codes that satisfy the convenient relations |no_print+1=term_only|,
|no_print+2=log_only|, |term_only+2=log_only+1=term_and_log|.

Three additional global variables, |tally| and |term_offset| and
|file_offset|, record the number of characters that have been printed since
they were most recently cleared to zero. We use |tally| to record the length
of (possibly very long) stretches of printing; |term_offset| and
|file_offset|, on the other hand, keep track of how many characters have
appeared so far on the current line that has been output to the terminal or
to the transcript file, respectively.
@c
alpha_file log_file;      /* transcript of \TeX\ session */
int selector = term_only; /* where to print a message */
int dig[23];              /* digits in a number being output */
int tally = 0;            /* the number of characters recently printed */
int term_offset = 0;      /* the number of characters on the current terminal line */
int file_offset = 0;      /* the number of characters on the current file line */
packed_ASCII_code trick_buf[(ssup_error_line + 1)]; /* circular buffer for pseudoprinting */
int trick_count;          /* threshold for pseudoprinting, explained later */
int first_count;          /* another variable for pseudoprinting */
boolean inhibit_par_tokens = false; /* for minor adjustments to |show_token_list| */

@ To end a line of text output, we call |print_ln|.

@c
void print_ln(void)
{
    switch (selector) {
        case no_print:
            break;
        case term_only:
            wterm_cr();
            term_offset = 0;
            break;
        case log_only:
            wlog_cr();
            file_offset = 0;
            break;
        case term_and_log:
            wterm_cr();
            wlog_cr();
            term_offset = 0;
            file_offset = 0;
            break;
        case pseudo:
            break;
        case new_string:
            if (new_string_line > 0)
                print_char(new_string_line);
            break;
        default:
            fprintf(write_file[selector], "\n");
            break;
    }
    /* |tally| is not affected */
}

@ The |print_char| procedure sends one byte to the desired destination. All
printing comes through |print_ln| or |print_char|, except for the case of
|tprint| (see below). The checking of the line length is an inheritance from
previous engines and we might drop it after release 1.0. We're not too picky
about the exact match of that length because we have utf output so length is
then a bit fuzzy anyway.

@c
#define needs_escaping(A) \
    ((! escape_controls) || (A>=0x20) || (A==0x0A) || (A==0x0D) || (A==0x09))

#define escaped_char(A) \
    A+64

#define wterm_char(A) \
    if (needs_escaping(A)) { \
        wterm(A); \
    } else { \
        if (term_offset+2>=max_print_line) { \
            wterm_cr(); \
            term_offset=0; \
        } \
        wterm('^'); \
        wterm('^'); \
        wterm(escaped_char(A)); \
        term_offset += 2; \
    }

/*
#define needs_wrapping(A,B) \
    (((A>=0xF0)&&(B+4>=max_print_line)) || \
    ((A>=0xE0)&&(B+3>=max_print_line)) || \
    ((A>=0xC0)&&(B+2>=max_print_line)))

    we have mostly ascii in logs, so ...
*/

#define needs_wrapping(A,B) \
    ( (A>=0xC0) && \
      (((A>=0xF0) && (B+4>=max_print_line)) || \
       ((A>=0xE0) && (B+3>=max_print_line)) || \
       ( (B+2>=max_print_line))) \
    )

#define fix_term_offset(A) \
    if (needs_wrapping(A,term_offset)){ \
        wterm_cr(); \
        term_offset=0; \
    }

#define fix_log_offset(A) \
    if (needs_wrapping(A,file_offset)){ \
        wlog_cr(); \
        file_offset=0; \
    }

void print_char(int s)
{
    if (s < 0 || s > 255) {
        formatted_warning("print","weird character %i",s);
        return;
    }
    if (s == new_line_char_par) {
        if (selector < pseudo) {
            print_ln();
            return;
        }
    }
    switch (selector) {
        case no_print:
            break;
        case term_only:
            fix_term_offset(s);
            wterm_char(s);
            incr(term_offset);
            if (term_offset == max_print_line) {
                wterm_cr();
                term_offset = 0;
            }
            break;
        case log_only:
            fix_log_offset(s);
            wlog(s);
            incr(file_offset);
            if (file_offset == max_print_line) {
                wlog_cr();
                file_offset = 0;
            }
            break;
        case term_and_log:
            fix_term_offset(s);
            fix_log_offset(s);
            wterm_char(s);
            wlog(s);
            incr(term_offset);
            incr(file_offset);
            if (term_offset == max_print_line) {
                wterm_cr();
                term_offset = 0;
            }
            if (file_offset == max_print_line) {
                wlog_cr();
                file_offset = 0;
            }
            break;
        case pseudo:
            if (tally < trick_count)
                trick_buf[tally % error_line] = (packed_ASCII_code) s;
            break;
        case new_string:
            append_char(s);
            break;
        default:
            fprintf(write_file[selector], "%c", s);
    }
    incr(tally);
}

@ An entire string is output by calling |print|.
Note that if we are outputting the single standard ASCII character \.c, we
could call |print("c")|, since |"c"=99| is the number of a single-character
string, as explained above. But |print_char("c")| is quicker, so \TeX\ goes
directly to the |print_char| routine when it knows that this is safe. (The
present implementation assumes that it is always safe to print a visible
ASCII character.)
@^system dependencies@>

The first 256 entries above the 17th unicode plane are used for a special
trick: when \TeX\ has to print items in that range, it will instead print
the character that results from subtracting 0x110000 from that value. This
allows byte-oriented output to things like \.{\\specials} and
\.{\\pdfextension literals}. Todo: Perhaps it would be useful to do the same
subtraction while typesetting.

@c
void print(int s)
{
    if (s >= str_ptr) {
        normal_warning("print","bad string pointer");
        return;
    } else if (s < STRING_OFFSET) {
        if (s < 0) {
            normal_warning("print","bad string offset");
        } else {
            /* TH not sure about this, disabled for now! */
            if ((false) && (selector > pseudo)) {
                /* internal strings are not expanded */
                print_char(s);
                return;
            }
            if (s == new_line_char_par) {
                if (selector < pseudo) {
                    print_ln();
                    return;
                }
            }
            if (s <= 0x7F) {
                print_char(s);
            } else if (s <= 0x7FF) {
                print_char(0xC0 + (s / 0x40));
                print_char(0x80 + (s % 0x40));
            } else if (s <= 0xFFFF) {
                print_char(0xE0 + (s / 0x1000));
                print_char(0x80 + ((s % 0x1000) / 0x40));
                print_char(0x80 + ((s % 0x1000) % 0x40));
            } else if (s >= 0x110000) {
                int c = s - 0x110000;
                if (c >= 256) {
                    formatted_warning("print", "bad raw byte to print (c=%d), skipped",c);
                } else {
                    print_char(c);
                }
            } else {
                print_char(0xF0 + (s / 0x40000));
                print_char(0x80 + ((s % 0x40000) / 0x1000));
                print_char(0x80 + (((s % 0x40000) % 0x1000) / 0x40));
                print_char(0x80 + (((s % 0x40000) % 0x1000) % 0x40));
            }
        }
        return;
    }
    if (selector == new_string) {
        append_string(str_string(s), (unsigned) str_length(s));
        return;
    }
    lprint(&str_lstring(s));
}

void lprint (lstring *ss)
{
    unsigned char *j, *l;       /* current character code position */
    j = ss->s;
    l = j + ss->l;
    while (j < l) {
        /* 0x110000 in utf-8: 0xF4 0x90 0x80 0x80 */
        /* I don't bother checking the last two bytes explicitly */
        if ((j < l - 4) && (*j == 0xF4) && (*(j + 1) == 0x90)) {
            int c = (*(j + 2) - 128) * 64 + (*(j + 3) - 128);
            assert(c >= 0 && c < 256);
            print_char(c);
            j = j + 4;
        } else {
            print_char(*j);
            incr(j);
        }
    }
}

@ The procedure |print_nl| is like |print|, but it makes sure that the
string appears at the beginning of a new line.

@c
void print_nlp(void)
{
    /* move to beginning of a line */
    if (new_string_line > 0) {
        print_char(new_string_line);
    } else if (((term_offset > 0) && (odd(selector))) ||
               ((file_offset > 0) && (selector >= log_only))) {
        print_ln();
    }
}

void print_nl(str_number s)
{
    /* prints string |s| at beginning of line */
    print_nlp();
    print(s);
}

@ |char *| versions of the same procedures. |tprint| is different because it
uses buffering, which works well because most of the output actually comes
through |tprint|.
@c
#define t_flush_buffer(target,offset) \
    buffer[i++] = '\n'; \
    buffer[i++] = '\0'; \
    fputs(buffer, target); \
    i = 0; \
    buffer[0] = '\0'; \
    offset=0;

void tprint(const char *sss)
{
    char *buffer = NULL;
    int i = 0;                  /* buffer index */
    int newlinechar = new_line_char_par;
    int dolog = 0;
    int doterm = 0;
    switch (selector) {
        case no_print:
            return;
            break;
        case term_only:
            doterm = 1;
            break;
        case log_only:
            dolog = 1;
            break;
        case term_and_log:
            dolog = 1;
            doterm = 1;
            break;
        case pseudo:
            while (*sss) {
                if (tally < trick_count) {
                    trick_buf[tally % error_line] = (packed_ASCII_code) *sss++;
                    tally++;
                } else {
                    return;
                }
            }
            return;
            break;
        case new_string:
            append_string((const unsigned char *)sss, (unsigned) strlen(sss));
            return;
            break;
        default:
            {
                char *newstr = xstrdup(sss);
                char *s;
                for (s=newstr;*s;s++) {
                    if (*s == newlinechar) {
                        *s = '\n';
                    }
                }
                fputs(newstr, write_file[selector]);
                free(newstr);
                return;
            }
            break;
    }
    /* what is left is the 3 term/log settings */
    if (dolog || doterm) {
        buffer = xmalloc(strlen(sss)*3);
        if (dolog) {
            const unsigned char *ss = (const unsigned char *) sss;
            while (*ss) {
                int s = *ss++;
                if (needs_wrapping(s,file_offset) || s == newlinechar) {
                    t_flush_buffer(log_file,file_offset);
                }
                if (s != newlinechar) {
                    buffer[i++] = s;
                    if (file_offset++ == max_print_line) {
                        t_flush_buffer(log_file,file_offset);
                    }
                }
            }
            if (*buffer) {
                buffer[i++] = '\0';
                fputs(buffer, log_file);
                buffer[0] = '\0';
            }
            i = 0;
        }
        if (doterm) {
            const unsigned char *ss = (const unsigned char *) sss;
            while (*ss) {
                int s = *ss++;
                if (needs_wrapping(s,term_offset) || s == newlinechar) {
                    t_flush_buffer(term_out,term_offset);
                }
                if (s != newlinechar) {
                    if (needs_escaping(s)) {
                        buffer[i++] = s;
                    } else {
                        buffer[i++] = '^';
                        buffer[i++] = '^';
                        buffer[i++] = escaped_char(s);
                        term_offset += 2;
                    }
                    if (++term_offset == max_print_line) {
                        t_flush_buffer(term_out,term_offset);
                    }
                }
            }
            if (*buffer) {
                buffer[i++] = '\0';
                fputs(buffer, term_out);
            }
        }
        free(buffer);
    }
}

void tprint_nl(const char *s)
{
    print_nlp();
    tprint(s);
}

@ Here is the very first thing that \TeX\ prints: a headline that identifies
the version number and format package. The |term_offset| variable is
temporarily incorrect, but the discrepancy is not serious since we assume
that the banner and format identifier together will occupy at most
|max_print_line| character positions.
@c
void print_banner(const char *v)
{
    int callback_id = callback_defined(start_run_callback);
    if (callback_id == 0) {
        fprintf(term_out, "This is " MyName ", Version %s%s ", v, WEB2CVERSION);
        if (format_ident > 0)
            print(format_ident);
        print_ln();
        if (show_luahashchars){
            wterm(' ');
            fprintf(term_out,"Number of bits used by the hash function (" my_name "): %d",LUAI_HASHLIMIT);
            print_ln();
        }
        if (shellenabledp) {
            wterm(' ');
            if (restrictedshell)
                fprintf(term_out, "restricted ");
            fprintf(term_out, "system commands enabled.\n");
        }
    } else if (callback_id > 0) {
        run_callback(callback_id, "->");
    }
}

@ @c
void log_banner(const char *v)
{
    const char *months[] = { " ",
        "JAN", "FEB", "MAR", "APR", "MAY", "JUN",
        "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"
    };
    unsigned month = (unsigned) month_par;
    if (month > 12)
        month = 0;
    fprintf(log_file, "This is " MyName ", Version %s%s ", v, WEB2CVERSION);
    print(format_ident);
    print_char(' ');
    print_char(' ');
    print_int(day_par);
    print_char(' ');
    fprintf(log_file, "%s", months[month]);
    print_char(' ');
    print_int(year_par);
    print_char(' ');
    print_two(time_par / 60);
    print_char(':');
    print_two(time_par % 60);
    if (shellenabledp) {
        wlog_cr();
        wlog(' ');
        if (restrictedshell)
            fprintf(log_file, "restricted ");
        fprintf(log_file, "system commands enabled.");
    }
    if (filelineerrorstylep) {
        wlog_cr();
        fprintf(log_file, " file:line:error style messages enabled.");
    }
}

@ @c
void print_version_banner(void)
{
    fprintf(term_out, "%s", luatex_banner);
}

@ The procedure |print_esc| prints a string that is preceded by the user's
escape character (which is usually a backslash).

@c
void print_esc(str_number s)
{
    /* set variable |c| to the current escape character */
    int c = escape_char_par;
    if (c >= 0 && c < STRING_OFFSET)
        print(c);
    print(s);
}

@ This prints the escape character, then |s|.

@c
void tprint_esc(const char *s)
{
    /* set variable |c| to the current escape character */
    int c = escape_char_par;
    if (c >= 0 && c < STRING_OFFSET)
        print(c);
    tprint(s);
}

@ An array of digits in the range |0..15| is printed by |print_the_digs|.

@c
void print_the_digs(eight_bits k)
{
    /* prints |dig[k-1]|$\,\ldots\,$|dig[0]| */
    while (k-- > 0) {
        if (dig[k] < 10)
            print_char('0' + dig[k]);
        else
            print_char('A' - 10 + dig[k]);
    }
}

@ The following procedure, which prints out the decimal representation of a
given integer |n|, has been written carefully so that it works properly if
|n=0| or if |(-n)| would cause overflow. It does not apply |mod| or |div| to
negative arguments, since such operations are not implemented consistently
by all PASCAL compilers.

@c
void print_int(longinteger n)
{
    int k = 0;                  /* index to current digit; we assume that $|n|<10^{23}$ */
    longinteger m;              /* used to negate |n| in possibly dangerous cases */
    if (n < 0) {
        print_char('-');
        if (n > -100000000) {
            n = -n;
        } else {
            m = -1 - n;
            n = m / 10;
            m = (m % 10) + 1;
            k = 1;
            if (m < 10)
                dig[0] = (int) m;
            else {
                dig[0] = 0;
                incr(n);
            }
        }
    }
    do {
        dig[k] = (int) (n % 10);
        n = n / 10;
        incr(k);
    } while (n != 0);
    print_the_digs((eight_bits) k);
}

@ Here is a trivial procedure to print two digits; it is usually called with
a parameter in the range |0<=n<=99|.

@c
void print_two(int n)
{
    n = abs(n) % 100;
    print_char('0' + (n / 10));
    print_char('0' + (n % 10));
}

@ Hexadecimal printing of nonnegative integers is accomplished by |print_hex|.
@c
void print_hex(int n)
{
    int k = 0 ;                 /* index to current digit; we assume that $0\L n<16^{22}$ */
    print_char('"');
    do {
        dig[k] = n % 16;
        n = n / 16;
        incr(k);
    } while (n != 0);
    print_the_digs((eight_bits) k);
}

@ Roman numerals are produced by the |print_roman_int| routine. Readers who
like puzzles might enjoy trying to figure out how this tricky code works;
therefore no explanation will be given. Notice that 1990 yields \.{mcmxc},
not \.{mxm}.

@c
void print_roman_int(int n)
{
    char *j, *k;                /* mysterious indices */
    int u, v;                   /* mysterious numbers */
    char mystery[] = "m2d5c2l5x2v5i";
    j = (char *) mystery;
    v = 1000;
    while (1) {
        while (n >= v) {
            print_char(*j);
            n = n - v;
        }
        if (n <= 0) {
            /* nonpositive input produces no output */
            return;
        }
        k = j + 2;
        u = v / (*(k - 1) - '0');
        if (*(k - 1) == '2') {
            k = k + 2;
            u = u / (*(k - 1) - '0');
        }
        if (n + u >= v) {
            print_char(*k);
            n = n + u;
        } else {
            j = j + 2;
            v = v / (*(j - 1) - '0');
        }
    }
}

@ The |print| subroutine will not print a string that is still being
created. The following procedure will.

@c
void print_current_string(void)
{
    unsigned j = 0;             /* points to current character code */
    while (j < cur_length)
        print_char(cur_string[j++]);
}

@ The procedure |print_cs| prints the name of a control sequence, given a
pointer to its address in |eqtb|. A space is printed after the name unless
it is a single nonletter or an active character. This procedure might be
invoked with invalid data, so it is ``extra robust.'' The individual
characters must be printed one at a time using |print|, since they may be
unprintable.

@c
void print_cs(int p)
{
    str_number t = cs_text(p);
    if (p < hash_base) {
        /* nullcs */
        if (p == null_cs) {
            tprint_esc("csname");
            tprint_esc("endcsname");
            print_char(' ');
        } else {
            tprint_esc("IMPOSSIBLE.");
        }
    } else if ((p >= undefined_control_sequence) &&
               ((p <= eqtb_size) || p > eqtb_size + hash_extra)) {
        tprint_esc("IMPOSSIBLE.");
    } else if (t >= str_ptr) {
        tprint_esc("NONEXISTENT.");
    } else {
        if (is_active_cs(t)) {
            print(active_cs_value(t));
        } else {
            print_esc(t);
            if (single_letter(t)) {
                if (get_cat_code(cat_code_table_par, pool_to_unichar(str_string(t))) == letter_cmd)
                    print_char(' ');
            } else {
                print_char(' ');
            }
        }
    }
}

@ Here is a similar procedure; it avoids the error checks, and it never
prints a space after the control sequence.

@c
void sprint_cs(pointer p)
{
    str_number t;
    if (p == null_cs) {
        tprint_esc("csname");
        tprint_esc("endcsname");
    } else {
        t = cs_text(p);
        if (is_active_cs(t))
            print(active_cs_value(t));
        else
            print_esc(t);
    }
}

void sprint_cs_name(pointer p)
{
    str_number t;
    if (p != null_cs) {
        t = cs_text(p);
        if (is_active_cs(t))
            print(active_cs_value(t));
        else
            print(t);
    }
}

@ The next subroutine prints a glue component, possibly followed by the
name of a finite unit:

@c
void print_glue(scaled d, int order, const char *s)
{
    /* prints a glue component */
    print_scaled(d);
    if ((order < normal) || (order > filll)) {
        tprint("foul");
    } else if (order > normal) {
        tprint("fi");
        while (order > sfi) {
            print_char('l');
            decr(order);
        }
    } else if (s != NULL) {
        tprint(s);
    }
}

@ The next subroutine prints a whole glue specification.

@c
void print_spec(int p, const char *s)
{
    if (p < 0) {
        print_char('*');
    } else {
        print_scaled(width(p));
        if (s != NULL)
            tprint(s);
        if (stretch(p) != 0) {
            tprint(" plus ");
            print_glue(stretch(p), stretch_order(p), s);
        }
        if (shrink(p) != 0) {
            tprint(" minus ");
            print_glue(shrink(p), shrink_order(p), s);
        }
    }
}

@ We can reinforce our knowledge of the data structures just introduced by
considering two procedures that display a list in symbolic form. The first
of these, called |short_display|, is used in ``overfull box'' messages to
give the top-level description of a list.
The other one, called |show_node_list|, prints a detailed description of
exactly what is in the data structure.

The philosophy of |short_display| is to ignore the fine points about exactly
what is inside boxes, except that ligatures and discretionary breaks are
expanded. As a result, |short_display| is a recursive procedure, but the
recursion is never more than one level deep.
@^recursion@>

A global variable |font_in_short_display| keeps track of the font code that
is assumed to be present when |short_display| begins; deviations from this
font will be printed.

@c
int font_in_short_display;      /* an internal font number */

@ Boxes, rules, inserts, whatsits, marks, and things in general that are
``sort of complicated'' are indicated only by printing `\.{[]}'.

@c
/*
    So, 0, 1 as well as any large value will behave the same as before. The
    reason for this extension is that a \name not always makes sense.

    0   \foo xyz
    1   \foo (bar)
    2   <bar> xyz
    3   <bar @@ ..pt> xyz
    4   <id>
    5   <id: bar>
    6   <id: bar @@ ..pt> xyz
*/

void print_font_identifier(internal_font_number f)
{
    str_number fonttext;
    fonttext = font_id_text(f);
    if (tracing_fonts_par >= 2 && tracing_fonts_par <= 6) {
        /* < > is less likely to clash with text parenthesis */
        tprint("<");
        if (tracing_fonts_par >= 2 && tracing_fonts_par <= 3) {
            print_font_name(f);
            if (tracing_fonts_par >= 3 || font_size(f) != font_dsize(f)) {
                tprint(" @@ ");
                print_scaled(font_size(f));
                tprint("pt");
            }
        } else if (tracing_fonts_par >= 4 && tracing_fonts_par <= 6) {
            print_int(f);
            if (tracing_fonts_par >= 5) {
                tprint(": ");
                print_font_name(f);
                if (tracing_fonts_par >= 6 || font_size(f) != font_dsize(f)) {
                    tprint(" @@ ");
                    print_scaled(font_size(f));
                    tprint("pt");
                }
            }
        }
        print_char('>');
    } else {
        /* old method, inherited from pdftex */
        if (fonttext > 0) {
            print_esc(fonttext);
        } else {
            tprint_esc("FONT");
            print_int(f);
        }
        if (tracing_fonts_par > 0) {
            tprint(" (");
            print_font_name(f);
            if (font_size(f) != font_dsize(f)) {
                tprint("@@");
                print_scaled(font_size(f));
                tprint("pt");
            }
            print_char(')');
        }
    }
}

@ This prints highlights of list |p|.

@c
void short_display(int p)
{
    while (p != null) {
        if (is_char_node(p)) {
            if (lig_ptr(p) != null) {
                short_display(lig_ptr(p));
            } else {
                if (font(p) != font_in_short_display) {
                    if (!is_valid_font(font(p)))
                        print_char('*');
                    else
                        print_font_identifier(font(p));
                    print_char(' ');
                    font_in_short_display = font(p);
                }
                print(character(p));
            }
        } else {
            /* print a short indication of the contents of node |p| */
            print_short_node_contents(p);
        }
        p = vlink(p);
    }
}

@ The |show_node_list| routine requires some auxiliary subroutines: one to
print a font-and-character combination, one to print a token list without
its reference count, and one to print a rule dimension.

@ This prints |char_node| data.

@c
void print_font_and_char(int p)
{
    if (!is_valid_font(font(p)))
        print_char('*');
    else
        print_font_identifier(font(p));
    print_char(' ');
    print(character(p));
}

@ This prints token list data in braces.

@c
void print_mark(int p)
{
    print_char('{');
    if ((p < (int) fix_mem_min) || (p > (int) fix_mem_end))
        tprint_esc("CLOBBERED.");
    else
        show_token_list(token_link(p), null, max_print_line - 10);
    print_char('}');
}

@ This prints the dimensions of a rule node.

@c
void print_rule_dimen(scaled d)
{
    if (is_running(d))
        print_char('*');
    else
        print_scaled(d);
}

@ Since boxes can be inside of boxes, |show_node_list| is inherently
recursive,
@^recursion@>
up to a given maximum number of levels.
The history of nesting is indicated by the current string, which will be
printed at the beginning of each line; the length of this string, namely
|cur_length|, is the depth of nesting.

A global variable called |depth_threshold| is used to record the maximum
depth of nesting for which |show_node_list| will show information. If we
have |depth_threshold=0|, for example, only the top level information will
be given and no sublists will be traversed. Another global variable, called
|breadth_max|, tells the maximum number of items to show at each level;
|breadth_max| had better be positive, or you won't see anything.

@c
int depth_threshold;            /* maximum nesting depth in box displays */
int breadth_max;                /* maximum number of items shown at the same list level */

@ The recursive machinery is started by calling |show_box|. Assign the
values |depth_threshold:=show_box_depth| and |breadth_max:=show_box_breadth|.

@c
void show_box(halfword p)
{
    depth_threshold = show_box_depth_par;
    breadth_max = show_box_breadth_par;
    if (breadth_max <= 0)
        breadth_max = 5;
    /* the show starts at |p| */
    show_node_list(p);
    print_ln();
}

@ Helper for debugging purposes. It prints highlights of list |p|.

@c
void short_display_n(int p, int m)
{
    int i = 0;
    font_in_short_display = null_font;
    if (p == null)
        return;
    while (p != null) {
        if (is_char_node(p)) {
            if (p <= max_halfword) {
                if (font(p) != font_in_short_display) {
                    if (!is_valid_font(font(p)))
                        print_char('*');
                    else
                        print_font_identifier(font(p));
                    print_char(' ');
                    font_in_short_display = font(p);
                }
                print(character(p));
            }
        } else {
            if ( (type(p) == glue_node) ||
                 (type(p) == disc_node) ||
                 (type(p) == penalty_node) ||
                 ((type(p) == kern_node) && (subtype(p) == explicit_kern ||
                                             subtype(p) == italic_kern ))) {
                incr(i);
            }
            if (i >= m)
                return;
            if (type(p) == disc_node) {
                print_char('|');
                short_display(vlink(pre_break(p)));
                print_char('|');
                short_display(vlink(post_break(p)));
                print_char('|');
            } else {
                /* print a short indication of the contents of node |p| */
                print_short_node_contents(p);
            }
        }
        p = vlink(p);
        if (p == null)
            return;
    }
    update_terminal();
}

@ When debugging a macro package, it can be useful to see the exact control
sequence names in the format file. For example, if ten new csnames appear,
it's nice to know what they are, to help pinpoint where they came from.
(This isn't a ``truly basic'' printing procedure, but that's a convenient
module in which to put it.)

@c
void print_csnames(int hstart, int hfinish)
{
    int h;
    unsigned char *c, *l;
    fprintf(stderr, "fmtdebug:csnames from %d to %d:", (int) hstart, (int) hfinish);
    for (h = hstart; h <= hfinish; h++) {
        if (cs_text(h) > 0) {
            /* we have anything at this position */
            c = str_string(cs_text(h));
            l = c + str_length(cs_text(h));
            while (c < l) {
                /* print the characters */
                fputc(*c++, stderr);
            }
            fprintf(stderr, "|");
        }
    }
}

@ A helper for printing file:line:error style messages. Look for a filename
in |full_source_filename_stack|, and if we fail to find one fall back on the
non-file:line:error style.

@c
void print_file_line(void)
{
    int level = in_open;
    while ((level > 0) && (full_source_filename_stack[level] == 0))
        decr(level);
    if (level == 0) {
        tprint_nl("! ");
    } else {
        tprint_nl("");
        tprint(full_source_filename_stack[level]);
        print_char(':');
        if (level == in_open)
            print_int(line);
        else
            print_int(line_stack[level + 1]);
        tprint(": ");
    }
}

@ \TeX\ is occasionally supposed to print diagnostic information that goes
only into the transcript file, unless |tracing_online| is positive.
Here are two routines that adjust the destination of print commands:

@c
void begin_diagnostic(void)
{
    /* prepare to do some tracing */
    global_old_setting = selector;
    if ((tracing_online_par <= 0) && (selector == term_and_log)) {
        decr(selector);
        if (history == spotless)
            history = warning_issued;
    }
}

@ Restore proper conditions after tracing.

@c
void end_diagnostic(boolean blank_line)
{
    tprint_nl("");
    if (blank_line)
        print_ln();
    selector = global_old_setting;
}

@ Of course we had better declare another global variable, if the previous
routines are going to work.

@c
int global_old_setting;
|
2019-06-27 00:32:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4599572718143463, "perplexity": 13639.323579472144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000609.90/warc/CC-MAIN-20190626234958-20190627020958-00192.warc.gz"}
|
http://codeforces.com/problemset/problem/204/B
|
B. Little Elephant and Cards
time limit per test
2 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
The Little Elephant loves to play with color cards.
He has n cards, each has exactly two colors (the color of the front side and the color of the back side). Initially, all the cards lay on the table with the front side up. In one move the Little Elephant can turn any card to the other side. The Little Elephant thinks that a set of cards on the table is funny if at least half of the cards have the same color (for each card the color of the upper side is considered).
Help the Little Elephant to find the minimum number of moves needed to make the set of n cards funny.
Input
The first line contains a single integer n (1 ≤ n ≤ 10^5) — the number of the cards. The following n lines contain the description of all cards, one card per line. The cards are described by a pair of positive integers not exceeding 10^9 — colors of both sides. The first number in a line is the color of the front of the card, the second one — of the back. The color of the front of the card may coincide with the color of the back of the card.
The numbers in the lines are separated by single spaces.
Output
On a single line print a single integer — the sought minimum number of moves. If it is impossible to make the set funny, print -1.
Examples
Input
3
4 7
4 7
7 4
Output
0
Input
5
4 7
7 4
2 11
9 7
1 1
Output
2
Note
In the first sample there initially are three cards lying with colors 4, 4, 7. Since two of the three cards are of the same color 4, you do not need to change anything, so the answer is 0.
In the second sample, you can turn the first and the fourth cards. After that three of the five cards will be of color 7.
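A sketch of the standard counting solution (not part of the original problem page; Python, all names my own): for every color, count the cards already showing it on the front and the cards that can be flipped to show it; a color is feasible when those together reach ⌈n/2⌉, at a cost of the flips needed to get the first count up to ⌈n/2⌉.
    import sys
    from collections import Counter

    def solve():
        data = sys.stdin.read().split()
        n = int(data[0])
        front = Counter()      # color -> cards already showing it
        back_only = Counter()  # color -> cards that must be flipped to show it
        for i in range(n):
            a, b = int(data[1 + 2 * i]), int(data[2 + 2 * i])
            front[a] += 1
            if b != a:
                back_only[b] += 1
        need = (n + 1) // 2    # "at least half" of the n cards
        best = -1
        for color in set(front) | set(back_only):
            if front[color] + back_only[color] >= need:
                moves = max(0, need - front[color])
                best = moves if best == -1 else min(best, moves)
        print(best)

    solve()
On the second sample this flips cards toward color 7: one card already shows 7 and two more carry it on the back, so two moves suffice, matching the expected output.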
|
2021-02-25 11:35:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22047598659992218, "perplexity": 359.4894245288466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350942.3/warc/CC-MAIN-20210225095141-20210225125141-00003.warc.gz"}
|
https://zbmath.org/?q=an:06946198
|
## Functional data analysis of amplitude and phase variation. (English) Zbl 1426.62146
Summary: The abundance of functional observations in scientific endeavors has led to a significant development in tools for functional data analysis (FDA). This kind of data comes with several challenges: infinite-dimensionality of function spaces, observation noise, and so on. However, there is another interesting phenomenon that creates problems in FDA. The functional data often comes with lateral displacements/deformations in curves, a phenomenon which is different from the height or amplitude variability and is termed phase variation. The presence of phase variability often artificially inflates data variance, blurs underlying data structures, and distorts principal components. While the separation and/or removal of phase from amplitude data is desirable, this is a difficult problem. In particular, a commonly used alignment procedure, based on minimizing the $$\mathbb{L}^{2}$$ norm between functions, does not provide satisfactory results. In this paper, we motivate the importance of dealing with the phase variability and summarize several current ideas for separating phase and amplitude components. These approaches differ in the following: (1) the definition and mathematical representation of phase variability, (2) the objective functions that are used in functional data alignment, and (3) the algorithmic tools for solving estimation/optimization problems. We use simple examples to illustrate various approaches and to provide useful contrast between them.
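As a toy numerical illustration of that inflation effect (not from the paper; plain NumPy, all parameters arbitrary), consider identical bumps that differ only by random lateral shifts:
    import numpy as np

    t = np.linspace(0, 1, 201)
    rng = np.random.default_rng(0)
    shifts = rng.normal(0.0, 0.05, size=(50, 1))
    # fifty copies of one Gaussian bump, each laterally displaced:
    # pure phase variation, zero amplitude variation
    curves = np.exp(-((t[None, :] - 0.5 - shifts) / 0.1) ** 2)
    print("max pointwise variance:", curves.var(axis=0).max())   # clearly > 0
    print("peak of the mean curve:", curves.mean(axis=0).max())  # < 1: blurred
    # undoing the shifts (perfect registration) removes that variance
    aligned = np.ones((50, 1)) * np.exp(-((t - 0.5) / 0.1) ** 2)
    print("after alignment:", aligned.var(axis=0).max())         # 0.0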
### MSC:
62G99 Nonparametric inference
62H25 Factor analysis and principal components; correspondence analysis
62J99 Linear inference, regression
|
2022-05-18 22:48:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4954020082950592, "perplexity": 9338.996325205353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00694.warc.gz"}
|
http://tg-bbl.com/35/19/game8.htm
|
Score / Stats
Orange vs Pink — Orange win, 58–52
Quarter scores (Orange–Pink): 1Q 17–6, 2Q 14–15, 3Q 7–14, 4Q 10–7, BP 10–10
Field goals: 33% – 28%
Free throws: 0% – 25%
3-pointers: 38% – 17%
Offensive rebounds: 27 – 20
Defensive rebounds: 10 – 16
Turnovers: 17 – 17
Orange
No | Player | PTS | 3P (M A %) | FG (M A %) | FT (M A %) | RBD (DR OR TOT) | AST | STL | TO | PF | TF
1 @i 11 3 6 50% 1 3 33% - - - 5 - 5 2 1 - - -
2 ÔgÁ@ 9 - - - 4 6 67% - - - 4 - 4 1 2 2 2 -
3 @ 10 - - - 5 9 56% 0 1 0% 6 5 11 2 3 4 1 -
4 @M 2 - - - 1 4 25% - - - 6 - 6 2 1 2 1 -
5 c@ 2 - - - 1 13 8% - - - 1 2 3 - 2 2 - -
6 R@M 2 - - - 1 7 14% 0 1 0% 2 - 2 2 - 2 - -
7 l@ 3 0 1 0% 1 6 17% - - - 2 - 2 2 1 2 - -
8 ܁@ʉ 9 0 1 0% 4 7 57% - - - 1 3 4 - 1 3 - -
@ S_ 5 @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ SoEh 5 @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
3 8 18 55 0 2
M: made, A: attempts, P: pass miss, C: catch miss, V: violation, PF: personal foul, TF: technical foul
Pink
No | Player | PTS | 3P (M A %) | FG (M A %) | FT (M A %) | RBD (DR OR TOT) | AST | STL | TO | PF | TF
1 R@i 6 - - - 3 7 43% - - - 3 1 4 3 2 4 - -
2 Έ@ - - - - - - - - - - - - - - - - - -
3 4 - - - 2 6 33% 0 2 0% 7 6 13 1 1 8 1 -
4 R@ 8 1 3 33% 1 8 13% 1 1 100% 2 - 2 - 1 3 1 -
5 l@ 6 - - - 3 7 43% 0 1 0% 2 5 7 - 2 1 - -
6 c@ 8 - - - 2 9 22% 1 1 100% 4 1 5 - - 1 - -
7 F@T 8 1 9 11% 2 5 40% 0 1 0% 2 1 3 3 4 - - -
8 u@t 2 - - - 1 8 13% 0 2 0% - 2 2 1 1 - - -
@ S_ 5 @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
@ SoEh @ @ @ 5 @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @ @
2 12 14 50 2 8
M: made, A: attempts, P: pass miss, C: catch miss, V: violation, PF: personal foul, TF: technical foul
|
2021-07-30 01:26:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4704519510269165, "perplexity": 74.72124549058313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153899.14/warc/CC-MAIN-20210729234313-20210730024313-00534.warc.gz"}
|
http://www.maths.usyd.edu.au/s/scnitm/aksamit-StochasticsAndFinanceSemi-001
|
SMS scnews item created by Anna Aksamit at Fri 1 Nov 2019 1147
Type: Seminar
Distribution: World
Expiry: 12 Nov 2019
Calendar1: 5 Nov 2019 1400-1500
CalLoc1: AGR Carslaw 829
CalTitle1: Stochastics and Finance Seminar: Jan Obloj -- Robust finance. Part II
Auth: [email protected] (assumed)
# Stochastics and Finance Seminar: Jan Obloj -- Robust Finance. Part II
Speaker: Prof Jan Obloj (Oxford)
Title: Robust Finance. Part II -- Fundamental Theorems
Abstract: We pursue a robust approach to pricing and hedging in mathematical finance. We
develop a general discrete time setting in which some underlying assets and options are
available for dynamic trading and a further set of European options, possibly with
varying maturities, is available for static trading. We include in our setup modelling
beliefs by allowing one to specify a set of paths to be considered, e.g. super-replication
of a contingent claim is required only for paths falling in the given set. Our
framework thus interpolates between model-independent and model-specific settings and
allows us to quantify the impact of making assumptions. We establish suitable FTAP and
Pricing-Hedging duality results which include as special cases previous results of
Acciaio et al. (2013), Bouchard and Nutz (2015), Burzoni et al. (2016) as well the
Dalang-Morton-Willinger theorem. Finally, we explain how to treat further problems,
such as insider trading (information quantification) or American options pricing. The
talk will cover a body of results developed in collaboration with A. Aksamit, M.
Burzoni, S. Deng, M. Frittelli, Z. Hou, M. Maggis, X. Tan and J. Wiesel.
http://www.maths.usyd.edu.au/u/SemConf/Stochastics_Finance/seminar.html
|
2020-02-22 04:26:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4213102161884308, "perplexity": 9066.993575234095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145648.56/warc/CC-MAIN-20200222023815-20200222053815-00236.warc.gz"}
|
https://motls.blogspot.com/2008/04/disorder-on-landscape.html?m=1
|
## Wednesday, April 16, 2008
### Disorder on the landscape
Dmitry I. Podolsky, Jaydeep Majumder, and Niko Jokela propose a new, possibly crucial mechanism of vacuum selection.
Dmitry also has a highly serious blog, Neqnet.
Their paper is somewhat similar to the papers about resonance tunneling and landscape percolation even though Podolsky et al. do not even cite the percolation papers despite the name of Henry Tye in the acknowledgements.
They also try to find preferred regions or points on the landscape. And they end up with the vacua with a small number of neighbors that could also be the Bouchard-Donagi heterotic model discussed yesterday (and a year ago). I intuitively feel that the right answer must be of this kind because these "individualist" vacua are special in a certain invariant sense. How are they led to similar conclusions?
A disorder on the landscape
They investigate the evolution of the probability distribution on the landscape (including the information about the probability to measure a certain value of the vacuum energy). Assuming some built-in disorders on the landscape, their task turns out to be similar to problems known to condensed matter physicists, especially those who like random media.
Now, Philip Anderson, a physics Nobel prize winner, is a passionate enemy of string theory. But one of the rules of science is that Nature doesn't and can't prevent your enemies who also study Her from finding your results relevant for their enterprise. ;-) And the enemies are not guaranteed to fully avoid the results of their field's enemy either!
So the final step that Podolsky et al. have to make to end up with their conclusion is to find Anderson localization on the landscape, which completely changes the dynamics of eternal inflation relative to the Bousso-Polchinski sector, especially in the sectors of the landscape with the Hausdorff dimension of the tunneling graph (a quantity related to the number of independent types of decay instantons or domain walls) smaller than three. This condensed matter phenomenon - the absence of diffusion of waves in random media - is very similar to the slowdown of the diffusion of the probability waves in the "individualist" corners of the landscape.
I am convinced that some people should study these "individualist" vacua with as small a number of neighbors, decay channels, or types of decay channels as possible, regardless of the precise justification of their special role, simply because special things deserve a special treatment. In physics, people have often started with "minimal", "simplest" theories - such as pure general relativity with the Einstein-Hilbert action - even though they didn't exactly know the reason (renormalization group, in the case of simple actions, which was not known to Einstein). It seems that the "individualist" vacua are the best landscape counterpart of the "minimal" theories from the past.
That's where I see the light and that's where the keys should be looked for. ;-)
These authors use methods of the dynamical renormalization group and classes of type IIB vacua with a couple of Klebanov-Strassler throats. Although I don't quite understand what it means to treat the probability distribution on the landscape as a function of time - and exactly whose time it is - and despite superficial similarities of the paper to some purely anthropic, dull papers, I think that they present quite a lot of new physics that could turn out to be relevant for the vacuum selection problem.
There is one general feature that makes this paper (and a couple of others) much more sensible and attractive than generic, dull anthropic papers. It actually cares about the transitions between the vacua. These transitions induce generalized "kinetic terms" that are guaranteed to influence the "cosmological dynamics", independently of how the measure should ultimately be defined. The dull anthropic papers only count the vacua, which is similar to a field theory without any kinetic terms. Once you appreciate quantum tunneling as a generalization of kinetic terms, the "individualist" points and places naturally become more stable and gain a significantly higher measure in the probability counting. They are special, after all. Nature is searching for simplicity as long as the word simplicity encodes a valid feature of the physical laws.
Bonus
You know, instead of some metaphysical quotas and anthropic pseudolaws, I like real physical laws such as the proportionality between the force and the acceleration, or the principle of stationary action, the continuity law for the stress-energy tensor, the Klein-Gordon equation, or the Maxwell-Boltzmann distribution. If you find these concepts too abstract and you don't see why they're so physical, I hope that this figure will help you.
And that's a visual memo.
#### 1 comment:
1. That is so attractive! Bad girl meets brainy girl. :D
|
2021-04-19 23:05:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5774089694023132, "perplexity": 922.1704869133848}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038917413.71/warc/CC-MAIN-20210419204416-20210419234416-00342.warc.gz"}
|
https://nimahejazi.org/publication/cvcovest_selector/
|
# Cross-validated loss-based covariance matrix estimator selection in high dimensions
### Abstract
The covariance matrix plays a fundamental role in many modern exploratory and inferential statistical procedures, including dimensionality reduction, hypothesis testing, and regression. In low-dimensional regimes, where the number of observations far exceeds the number of variables, the optimality of the sample covariance matrix as an estimator of this parameter is well-established. High-dimensional regimes do not admit such a convenience, however. As such, a variety of estimators have been derived to overcome the shortcomings of the sample covariance matrix in these settings. Yet, the question of selecting an optimal estimator from among the plethora available remains largely unaddressed. Using the framework of cross-validated loss-based estimation, we develop the theoretical underpinnings of just such an estimator selection procedure. In particular, we propose a general class of loss functions for covariance matrix estimation and establish finite-sample risk bounds and conditions for the asymptotic optimality of the cross-validated estimator selector with respect to these loss functions. We evaluate our proposed approach via a comprehensive set of simulation experiments and demonstrate its practical benefits by application in the exploratory analysis of two single-cell transcriptome sequencing datasets. A free and open-source software implementation of the proposed methodology, the cvCovEst R package, is briefly introduced.
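To make the general recipe concrete, here is a toy sketch of the selection loop in NumPy — emphatically not the cvCovEst implementation: the candidate list, the held-out sample covariance as target, and the plain Frobenius loss below are all simplifying stand-ins for the paper's loss class.
    import numpy as np

    def cv_select(X, estimators, n_folds=5, seed=0):
        # score each candidate estimator against held-out folds
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        folds = np.array_split(rng.permutation(n), n_folds)
        risks = np.zeros(len(estimators))
        for held in folds:
            train = np.setdiff1d(np.arange(n), held)
            target = np.cov(X[held], rowvar=False)  # held-out proxy for Sigma
            for j, est in enumerate(estimators):
                risks[j] += np.linalg.norm(est(X[train]) - target, "fro") ** 2
        return int(np.argmin(risks)), risks / n_folds

    def sample_cov(X):
        return np.cov(X, rowvar=False)

    def shrink(lam):  # shrink off-diagonal entries toward zero
        def est(X):
            S = np.cov(X, rowvar=False)
            return (1 - lam) * S + lam * np.diag(np.diag(S))
        return est

    X = np.random.default_rng(1).normal(size=(40, 60))  # high-dimensional: p > n
    best, risks = cv_select(X, [sample_cov, shrink(0.5), shrink(0.9)])
    print(best, risks)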
##### Nima Hejazi
###### PhD Candidate in Biostatistics
My research interests lie at the intersection of causal inference and machine learning, especially as applied to the statistical analysis of complex data from observational studies and experiments in the biomedical and health sciences.
|
2021-06-16 09:44:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28319936990737915, "perplexity": 587.8460905931487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623596.16/warc/CC-MAIN-20210616093937-20210616123937-00602.warc.gz"}
|
https://mathoverflow.net/questions/377061/does-the-fourier-expansion-of-the-j-function-have-any-prime-coefficients/377448
|
# Does the Fourier expansion of the j-function have any prime coefficients?
Title asks it: Does the Fourier expansion of the j-function have any prime coefficients?
A superabundance of congruences involving primes up to 13 rules out many candidates, but calculation suggests that primes $$p>13$$ occur as divisors at frequencies (about?) $$1/p$$.
But $$c_{71}=278775024890624328476718493296348769305198947=(353) (5533876049689057963) (142708463580969897033673)$$ so that might count as a near miss.
That said, though composite, none of $$c_{1319},c_{1559},c_{1871},c_{2111},c_{2231},c_{3239},c_{3551}, c_{4271}, c_{4799}, c_{5471}$$,... has a factor less than 100.
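For experimentation, the coefficients are straightforward to generate from $$j = E_4^3/\Delta$$ with exact integer series arithmetic; a minimal (deliberately slow) Python sketch, not code from this thread, with all helper names my own:
    from math import comb

    def j_coefficients(N):
        # coefficients c_1..c_N of j(q) = 1/q + 744 + sum_{n>=1} c_n q^n,
        # computed as E4(q)^3 / Delta(q) with exact integer arithmetic
        L = N + 2  # truncate all series at q^L

        def mul(a, b):
            c = [0] * L
            for i, ai in enumerate(a):
                if ai:
                    for j in range(L - i):
                        c[i + j] += ai * b[j]
            return c

        sigma3 = [0] * L
        for d in range(1, L):
            for m in range(d, L, d):
                sigma3[m] += d ** 3
        e4 = [1] + [240 * sigma3[n] for n in range(1, L)]
        a = mul(mul(e4, e4), e4)                 # E4^3

        b = [1] + [0] * (L - 1)                  # Delta/q = prod (1 - q^n)^24
        for n in range(1, L):
            f = [0] * L
            for k in range(25):
                if n * k < L:
                    f[n * k] = (-1) ** k * comb(24, k)
            b = mul(b, f)

        h = [0] * L                              # h = (E4^3)/(Delta/q); b[0] == 1
        for k in range(L):
            h[k] = a[k] - sum(b[i] * h[k - i] for i in range(1, k + 1))
        return h[2:]                             # h[1] == 744, h[n+1] == c_n

    print(j_coefficients(4))  # [196884, 21493760, 864299970, 20245856256]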
• The coefficients $c_{n}$ for $1 \leq n \leq 5 \cdot 10^{5}$ are all composite. Given that $c_{n} \sim e^{4 \pi \sqrt{n}}/(\sqrt{2} n^{3/4})$ it seems likely that there are some prime coefficients. – Jeremy Rouse Nov 21 '20 at 16:17
• Tabulation of the coefficients with many links to the literature at oeis.org/A000521. Factorizations up to $n=1000$ given at asahi-net.or.jp/~KC2H-MSM/mathland/matha1 – Gerry Myerson Nov 21 '20 at 23:20
• @JeremyRouse In fact, given that growth rate, our naive expectation should be infinitely many prime coefficients. – JoshuaZ Nov 25 '20 at 21:25
• I discovered a bug in my program (a type error) that leads to all the coefficients being labelled "not prime". – Jeremy Rouse Nov 25 '20 at 22:21
• The sum over the primes $p>13$ of $1/p$ (your expected frequencies) is divergent, so maybe a study of the relative frequency $f_{k}$ of the coefficients having exactly $k$ prime factors counted with multiplicity could shed some light on the problem. – Sylvain JULIEN Nov 25 '20 at 22:37
There are seven prime values (passing a BPSW test) of $$c_n$$ with $$n \le 2 \cdot 10^7$$, at indices 457871, 685031, 1029071, 1101431, 9407831, 11769911, and 18437999.
For a writeup about the computations, source code, and the prime numbers themselves, see: https://github.com/fredrik-johansson/jfunction
The first prime $$c_{457871}$$ is the following 3689-digit number:
308016365140118108176653183746582291088458784790204245648521450485504770942663977536156205633435431228098596465610822974281274346625911625874656881894706735595009467383651200522370636770891659387118652918260291061189449352755757479246920162731569864045344838015389291317705134830350121368703946579594017281352980203941884931710982332441809879085768986948234635645739865213789774346418339392684489078924253274239317027879859655424371378233210511540702718127020400440182491805732038508358484555885032827603490674677834179884393997287176932696533449463033512008472432674790089534809838933572617166849790932345545249075706541751537160920133810210336311402712303741273599077119683725227458444023753554499524044857033944676198555668378728748989099715927543848021130529436162446162289822381229278938686872050583214971365838434544494835681433982741042401189349304642976556700201516752378053735438686061646243599069811637282802651601877105532225152235777156248096152138309417185383383307183182129622531651547849907270346940554247854467793535874185452335995124012973075602873307196858697955229202706696094986396861583626038203717774363031983508215105620289400260622914732502045119856772641329230301669038328354993269691474386996014137775500188008256311369275311901218919190235378539187075205433678002971703226435570110531131966994423926777212511640719125455962049189175665797549316495832899008670726303071774483275127485739487200525324260265822285316944048851993105352550971522647260602031652252449820206966346999983633929685468656239556539232924002032387040163319983010988827192802372946943687061877271943096641729038510820952205513283925382503198462419163494851228807539517459449305048369009290029757434878910229133252129278771232831103710045119302283543166505885406938937852569049159014463181076249148280720597798237321366578553748431201559797006844267956817999498283544964460591534176270209278391151502278021262619189155618048760681847788443917811348589216114988809477433105396620604366230312965034976909355765196843982699510375222549289200406389130339164474809954511727287118338397577124188931327028566017686937077674086600726945464753501796235440136806567233201358562265497344918731574055841548455137112215810806600242687777361235632500615648326345109291670761111628198787573395749464748906314274134942666220111345410833895656765339152149757823629354889025962278932703406851894724123287745658367661411443099838450078183595594004192773343326769132950967444717678900504802976650949885936560138584045311563469680390289442083258707370140991382743671786220282824215486398621748789715833553455148245640417894177680261914896746856011308847702720423207742860857415579761636901296476855313619723506438784639855337208881190157874696010056306216596877498125284028935684954319517494544365845497139549459908658710590565863975393372578600371422478880787268105886659729753650621197193222366203992675954210454683285815194969160755399582470750332502132519874506349887596986309293667979780860376518333280395787401695933424091621171950112594295370491546180535817387817318984685296542642154513620637511626104946642170283878636304993634516074058761052443412220470860476309460339227227901162546862798213861388363262645995715919670817055293988478383995545219375485233343578272871803539199116265565421104627255087335942897379878335266974206306165090103658681926376133546311552733354478301490242655810514360577546432608866002184549863430356502760550470680907497339403635395127601869699856975860684799260775159273514092120938136172101555250351380514008265190323399491138817121308738542
63099726139035786383205124051644483087596049043497124737908938233702928805190964560673396398573801431769834039050534841422792714076699
• FWIW: Mathematica confirms that the two numbers listed above are prime. – Aeryk Nov 25 '20 at 19:31
• @SylvainJULIEN Not sure what method Fredrik used, but Pari GP (and hence also Sage) use the Baillie-PSW primality test for pseudoprimality. No failures of this test are known, but I don't think any standard conjectures guarantee its correctness. According to the article Mathematica's PrimeQ also uses this test. – Wojowu Nov 25 '20 at 19:59
• I checked with Pari and Flint which both implement (possibly slightly different versions of) BPSW which has no known counterexamples. In any case, these numbers are small enough that they can be certified prime in reasonable time (hours or days). – Fredrik Johansson Nov 25 '20 at 20:04
• Fantastic. Thanks. Proving now that infinitely many such primes exist seems very difficult. – David Feldman Nov 25 '20 at 20:45
• The ECPP test completed - $c_{457871}$ is prime. Primo 4.3.2 took 11336 seconds to verify it (running on six cores on a fairly old machine). – Jeremy Rouse Nov 28 '20 at 17:57
|
2021-07-27 16:23:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7290753126144409, "perplexity": 1247.194160681965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153392.43/warc/CC-MAIN-20210727135323-20210727165323-00702.warc.gz"}
|
https://doc.sagemath.org/html/en/reference/game_theory/sage/game_theory/cooperative_game.html
|
Co-operative Games With Finite Players
This module implements a class for a characteristic function cooperative game. Methods to calculate the Shapley value (a fair way of sharing common resources: see [CEW2011]) as well as test properties of the game (monotonicity, superadditivity) are also included.
AUTHORS:
• James Campbell and Vince Knight (06-2014): Original version
class sage.game_theory.cooperative_game.CooperativeGame(characteristic_function)
An object representing a co-operative game. Primarily used to compute the Shapley value, but can also provide other information.
INPUT:
• characteristic_function – a dictionary containing all possible sets of players:
• key - each set must be entered as a tuple.
• value - a real number representing each set of players' contribution
EXAMPLES:
The type of game that is currently implemented is referred to as a Characteristic function game. This is a game on a set of players $$\Omega$$ that is defined by a value function $$v : C \to \RR$$ where $$C = 2^{\Omega}$$ is the set of all coalitions of players. Let $$N := |\Omega|$$. An example of such a game is shown below:
$\begin{split}v(c) = \begin{cases} 0 &\text{if } c = \emptyset, \\ 6 &\text{if } c = \{1\}, \\ 12 &\text{if } c = \{2\}, \\ 42 &\text{if } c = \{3\}, \\ 12 &\text{if } c = \{1,2\}, \\ 42 &\text{if } c = \{1,3\}, \\ 42 &\text{if } c = \{2,3\}, \\ 42 &\text{if } c = \{1,2,3\}. \\ \end{cases}\end{split}$
The function $$v$$ can be thought of as a record of the contribution of individuals and coalitions of individuals. Of interest is how to fairly share the value of the grand coalition ($$\Omega$$). This class allows such an answer to be formulated by calculating the Shapley value of the game.
Basic examples of how to implement a co-operative game. These functions will be used repeatedly in other examples.
sage: integer_function = {(): 0,
....: (1,): 6,
....: (2,): 12,
....: (3,): 42,
....: (1, 2,): 12,
....: (1, 3,): 42,
....: (2, 3,): 42,
....: (1, 2, 3,): 42}
sage: integer_game = CooperativeGame(integer_function)
We can also use strings instead of numbers.
sage: letter_function = {(): 0,
....: ('A',): 6,
....: ('B',): 12,
....: ('C',): 42,
....: ('A', 'B',): 12,
....: ('A', 'C',): 42,
....: ('B', 'C',): 42,
....: ('A', 'B', 'C',): 42}
sage: letter_game = CooperativeGame(letter_function)
Please note that keys should be tuples. '1, 2, 3' is not a valid key, neither is 123. The correct input would be (1, 2, 3). Similarly, for coalitions containing a single element the trailing-comma notation (which tells Sage that it is a tuple) must be used. So (1,) is correct, however simply inputting $$1$$ is not, and neither is (1), which Python reads as the plain integer 1.
Characteristic function games can be of various types.
A characteristic function game $$G = (N, v)$$ is monotone if it satisfies $$v(C_2) \geq v(C_1)$$ for all $$C_1 \subseteq C_2$$. A characteristic function game $$G = (N, v)$$ is superadditive if it satisfies $$v(C_1 \cup C_2) \geq v(C_1) + v(C_2)$$ for all $$C_1, C_2 \subseteq \Omega$$ such that $$C_1 \cap C_2 = \emptyset$$.
We can test if a game is monotonic or superadditive.
sage: letter_game.is_monotone()
True
sage: letter_game.is_superadditive()
False
Instances have a basic representation that will display basic information about the game:
sage: letter_game
A 3 player co-operative game
It can be shown that the “fair” payoff vector, referred to as the Shapley value, is given by the following formula:
$\phi_i(G) = \frac{1}{N!} \sum_{\pi\in\Pi_n} \Delta_{\pi}^G(i),$
where the summation is over the permutations of the players and the marginal contributions of a player for a given permutation is given as:
$\Delta_{\pi}^G(i) = v\bigl( S_{\pi}(i) \cup \{i\} \bigr) - v\bigl( S_{\pi}(i) \bigr)$
where $$S_{\pi}(i)$$ is the set of predecessors of $$i$$ in $$\pi$$, i.e. $$S_{\pi}(i) = \{ j \mid \pi(i) > \pi(j) \}$$ (or the number of inversions of the form $$(i, j)$$).
This payoff vector is “fair” in that it has a collection of properties referred to as: efficiency, symmetry, additivity and Null player. Some of these properties are considered in this documentation (and tests are implemented in the class) but for a good overview see [CEW2011].
Note ([MSZ2013]) that an equivalent formula for the Shapley value is given by:
$\phi_i(G) = \sum_{S \subseteq \Omega} \sum_{p \in S} \frac{(|S|-1)!(N-|S|)!}{N!} \bigl( v(S) - v(S \setminus \{p\}) \bigr) = \sum_{S \subseteq \Omega} \sum_{p \in S} \frac{1}{|S|\binom{N}{|S|}} \bigl( v(S) - v(S \setminus \{p\}) \bigr).$
This latter formulation is implemented in Sage and requires $$2^N-1$$ calculations instead of $$N!$$.
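For readers who want to see this formula spelled out, here is a hedged pure-Python sketch (an illustration only, not the Sage source; `v` maps frozensets of players to values):

```python
# Pure-Python sketch of the subset formula for the Shapley value.
# v maps frozensets of players to values; weights are (|S|-1)!(N-|S|)!/N!.
from itertools import combinations
from math import factorial

def shapley(players, v):
    n = len(players)
    phi = {p: 0 for p in players}
    for size in range(1, n + 1):
        w = factorial(size - 1) * factorial(n - size) / factorial(n)
        for S in combinations(players, size):
            S = frozenset(S)
            for p in S:
                phi[p] += w * (v[S] - v[S - {p}])
    return phi

v = {frozenset(s): x for s, x in
     [((), 0), ('A', 6), ('B', 12), ('C', 42), ('AB', 12),
      ('AC', 42), ('BC', 42), ('ABC', 42)]}
print(shapley('ABC', v))   # {'A': 2.0, 'B': 5.0, 'C': 35.0}
```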
To compute the Shapley value in Sage is simple:
sage: letter_game.shapley_value()
{'A': 2, 'B': 5, 'C': 35}
The following example implements a (trivial) 10 player characteristic function game with $$v(c) = |c|$$ for all $$c \in 2^{\Omega}$$.
sage: def simple_characteristic_function(N):
....: return {tuple(coalition) : len(coalition)
....: for coalition in subsets(range(N))}
sage: g = CooperativeGame(simple_characteristic_function(10))
sage: g.shapley_value()
{0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1}
For very large games it might be worth taking advantage of the particular problem structure to calculate the Shapley value, and there are also various approximation approaches to obtaining the Shapley value of a game (see [SWJ2008] for one such example). Implementing these would be a worthwhile development. For more information about the computational complexity of calculating the Shapley value see [XP1994].
We can test 3 basic properties of any payoff vector $$\lambda$$. The Shapley value (described above) is known to be the unique payoff vector that satisfies these and 1 other property not implemented here (additivity). They are:
• Efficiency - $$\sum_{i=1}^N \lambda_i = v(\Omega)$$ In other words, no value of the total coalition is lost.
• The nullplayer property - If there exists an $$i$$ such that $$v(C \cup i) = v(C)$$ for all $$C \in 2^{\Omega}$$, then $$\lambda_i = 0$$. In other words: if a player does not contribute to any coalition then that player should receive no payoff.
• Symmetry property - If $$v(C \cup i) = v(C \cup j)$$ for all $$C \subseteq \Omega \setminus \{i,j\}$$, then $$\lambda_i = \lambda_j$$. If players contribute symmetrically then they should get the same payoff:
sage: payoff_vector = letter_game.shapley_value()
sage: letter_game.is_efficient(payoff_vector)
True
sage: letter_game.nullplayer(payoff_vector)
True
sage: letter_game.is_symmetric(payoff_vector)
True
Any payoff vector can be passed to the game and these properties can once again be tested:
sage: payoff_vector = {'A': 0, 'C': 35, 'B': 3}
sage: letter_game.is_efficient(payoff_vector)
False
sage: letter_game.nullplayer(payoff_vector)
True
sage: letter_game.is_symmetric(payoff_vector)
True
is_efficient(payoff_vector)
Return True if payoff_vector is efficient.
A payoff vector $$\lambda$$ is efficient if $$\sum_{i=1}^N \lambda_i = v(\Omega)$$; in other words, no value of the total coalition is lost.
INPUT:
• payoff_vector – a dictionary where the key is the player and the value is their payoff
EXAMPLES:
An efficient payoff vector:
sage: letter_function = {(): 0,
....: ('A',): 6,
....: ('B',): 12,
....: ('C',): 42,
....: ('A', 'B',): 12,
....: ('A', 'C',): 42,
....: ('B', 'C',): 42,
....: ('A', 'B', 'C',): 42}
sage: letter_game = CooperativeGame(letter_function)
sage: letter_game.is_efficient({'A': 14, 'B': 14, 'C': 14})
True
sage: letter_function = {(): 0,
....: ('A',): 6,
....: ('B',): 12,
....: ('C',): 42,
....: ('A', 'B',): 12,
....: ('A', 'C',): 42,
....: ('B', 'C',): 42,
....: ('A', 'B', 'C',): 42}
sage: letter_game = CooperativeGame(letter_function)
sage: letter_game.is_efficient({'A': 10, 'B': 14, 'C': 14})
False
A longer example:
sage: long_function = {(): 0,
....: (1,): 0,
....: (2,): 0,
....: (3,): 0,
....: (4,): 0,
....: (1, 2): 0,
....: (1, 3): 0,
....: (1, 4): 0,
....: (2, 3): 0,
....: (2, 4): 0,
....: (3, 4): 0,
....: (1, 2, 3): 0,
....: (1, 2, 4): 45,
....: (1, 3, 4): 40,
....: (2, 3, 4): 0,
....: (1, 2, 3, 4): 65}
sage: long_game = CooperativeGame(long_function)
sage: long_game.is_efficient({1: 20, 2: 20, 3: 5, 4: 20})
True
is_monotone()
Return True if self is monotonic.
A game $$G = (N, v)$$ is monotonic if it satisfies $$v(C_2) \geq v(C_1)$$ for all $$C_1 \subseteq C_2$$.
EXAMPLES:
A simple game that is monotone:
sage: integer_function = {(): 0,
....: (1,): 6,
....: (2,): 12,
....: (3,): 42,
....: (1, 2,): 12,
....: (1, 3,): 42,
....: (2, 3,): 42,
....: (1, 2, 3,): 42}
sage: integer_game = CooperativeGame(integer_function)
sage: integer_game.is_monotone()
True
An example when the game is not monotone:
sage: integer_function = {(): 0,
....: (1,): 6,
....: (2,): 12,
....: (3,): 42,
....: (1, 2,): 10,
....: (1, 3,): 42,
....: (2, 3,): 42,
....: (1, 2, 3,): 42}
sage: integer_game = CooperativeGame(integer_function)
sage: integer_game.is_monotone()
False
An example on a longer game:
sage: long_function = {(): 0,
....: (1,): 0,
....: (2,): 0,
....: (3,): 0,
....: (4,): 0,
....: (1, 2): 0,
....: (1, 3): 0,
....: (1, 4): 0,
....: (2, 3): 0,
....: (2, 4): 0,
....: (3, 4): 0,
....: (1, 2, 3): 0,
....: (1, 2, 4): 45,
....: (1, 3, 4): 40,
....: (2, 3, 4): 0,
....: (1, 2, 3, 4): 65}
sage: long_game = CooperativeGame(long_function)
sage: long_game.is_monotone()
True
is_superadditive()
Return True if self is superadditive.
A characteristic function game $$G = (N, v)$$ is superadditive if it satisfies $$v(C_1 \cup C_2) \geq v(C_1) + v(C_2)$$ for all $$C_1, C_2 \subseteq \Omega$$ such that $$C_1 \cap C_2 = \emptyset$$.
EXAMPLES:
An example that is not superadditive:
sage: integer_function = {(): 0,
....: (1,): 6,
....: (2,): 12,
....: (3,): 42,
....: (1, 2,): 12,
....: (1, 3,): 42,
....: (2, 3,): 42,
....: (1, 2, 3,): 42}
sage: integer_game = CooperativeGame(integer_function)
sage: integer_game.is_superadditive()
False
An example that is superadditive:
sage: A_function = {(): 0,
....: (1,): 6,
....: (2,): 12,
....: (3,): 42,
....: (1, 2,): 18,
....: (1, 3,): 48,
....: (2, 3,): 55,
....: (1, 2, 3,): 80}
sage: A_game = CooperativeGame(A_function)
sage: A_game.is_superadditive()
True
An example with a longer game that is superadditive:
sage: long_function = {(): 0,
....: (1,): 0,
....: (2,): 0,
....: (3,): 0,
....: (4,): 0,
....: (1, 2): 0,
....: (1, 3): 0,
....: (1, 4): 0,
....: (2, 3): 0,
....: (2, 4): 0,
....: (3, 4): 0,
....: (1, 2, 3): 0,
....: (1, 2, 4): 45,
....: (1, 3, 4): 40,
....: (2, 3, 4): 0,
....: (1, 2, 3, 4): 65}
sage: long_game = CooperativeGame(long_function)
sage: long_game.is_superadditive()
True
An example with a longer game that is not:
sage: long_function = {(): 0,
....: (1,): 0,
....: (2,): 0,
....: (3,): 55,
....: (4,): 0,
....: (1, 2): 0,
....: (1, 3): 0,
....: (1, 4): 0,
....: (2, 3): 0,
....: (2, 4): 0,
....: (3, 4): 0,
....: (1, 2, 3): 0,
....: (1, 2, 4): 45,
....: (1, 3, 4): 40,
....: (2, 3, 4): 0,
....: (1, 2, 3, 4): 85}
sage: long_game = CooperativeGame(long_function)
sage: long_game.is_superadditive()
False
is_symmetric(payoff_vector)
Return True if payoff_vector possesses the symmetry property.
A payoff vector possesses the symmetry property if, whenever $$v(C \cup i) = v(C \cup j)$$ for all $$C \subseteq \Omega \setminus \{i,j\}$$, it holds that $$\lambda_i = \lambda_j$$.
INPUT:
• payoff_vector – a dictionary where the key is the player and the value is their payoff
EXAMPLES:
A payoff vector that has the symmetry property:
sage: letter_function = {(): 0,
....: ('A',): 6,
....: ('B',): 12,
....: ('C',): 42,
....: ('A', 'B',): 12,
....: ('A', 'C',): 42,
....: ('B', 'C',): 42,
....: ('A', 'B', 'C',): 42}
sage: letter_game = CooperativeGame(letter_function)
sage: letter_game.is_symmetric({'A': 5, 'B': 14, 'C': 20})
True
A payoff vector that returns False:
sage: integer_function = {(): 0,
....: (1,): 12,
....: (2,): 12,
....: (3,): 42,
....: (1, 2,): 12,
....: (1, 3,): 42,
....: (2, 3,): 42,
....: (1, 2, 3,): 42}
sage: integer_game = CooperativeGame(integer_function)
sage: integer_game.is_symmetric({1: 2, 2: 5, 3: 35})
False
A longer example for symmetry:
sage: long_function = {(): 0,
....: (1,): 0,
....: (2,): 0,
....: (3,): 0,
....: (4,): 0,
....: (1, 2): 0,
....: (1, 3): 0,
....: (1, 4): 0,
....: (2, 3): 0,
....: (2, 4): 0,
....: (3, 4): 0,
....: (1, 2, 3): 0,
....: (1, 2, 4): 45,
....: (1, 3, 4): 40,
....: (2, 3, 4): 0,
....: (1, 2, 3, 4): 65}
sage: long_game = CooperativeGame(long_function)
sage: long_game.is_symmetric({1: 20, 2: 20, 3: 5, 4: 20})
True
nullplayer(payoff_vector)
Return True if payoff_vector possesses the nullplayer property.
A payoff vector $$\lambda$$ has the nullplayer property if, whenever there exists an $$i$$ such that $$v(C \cup i) = v(C)$$ for all $$C \in 2^{\Omega}$$, it holds that $$\lambda_i = 0$$. In other words: if a player does not contribute to any coalition then that player should receive no payoff.
INPUT:
• payoff_vector – a dictionary where the key is the player and the value is their payoff
EXAMPLES:
A payoff vector that returns True:
sage: letter_function = {(): 0,
....: ('A',): 0,
....: ('B',): 12,
....: ('C',): 42,
....: ('A', 'B',): 12,
....: ('A', 'C',): 42,
....: ('B', 'C',): 42,
....: ('A', 'B', 'C',): 42}
sage: letter_game = CooperativeGame(letter_function)
sage: letter_game.nullplayer({'A': 0, 'B': 14, 'C': 14})
True
A payoff vector that returns False:
sage: A_function = {(): 0,
....: (1,): 0,
....: (2,): 12,
....: (3,): 42,
....: (1, 2,): 12,
....: (1, 3,): 42,
....: (2, 3,): 55,
....: (1, 2, 3,): 55}
sage: A_game = CooperativeGame(A_function)
sage: A_game.nullplayer({1: 10, 2: 10, 3: 25})
False
A longer example for nullplayer:
sage: long_function = {(): 0,
....: (1,): 0,
....: (2,): 0,
....: (3,): 0,
....: (4,): 0,
....: (1, 2): 0,
....: (1, 3): 0,
....: (1, 4): 0,
....: (2, 3): 0,
....: (2, 4): 0,
....: (3, 4): 0,
....: (1, 2, 3): 0,
....: (1, 2, 4): 45,
....: (1, 3, 4): 40,
....: (2, 3, 4): 0,
....: (1, 2, 3, 4): 65}
sage: long_game = CooperativeGame(long_function)
sage: long_game.nullplayer({1: 20, 2: 20, 3: 5, 4: 20})
True
shapley_value()
Return the Shapley value for self.
The Shapley value is the “fair” payoff vector and is computed by the following formula:
$\phi_i(G) = \sum_{S \subseteq \Omega} \sum_{p \in S} \frac{1}{|S|\binom{N}{|S|}} \bigl( v(S) - v(S \setminus \{p\}) \bigr).$
EXAMPLES:
A typical example of computing the Shapley value:
sage: integer_function = {(): 0,
....: (1,): 6,
....: (2,): 12,
....: (3,): 42,
....: (1, 2,): 12,
....: (1, 3,): 42,
....: (2, 3,): 42,
....: (1, 2, 3,): 42}
sage: integer_game = CooperativeGame(integer_function)
sage: integer_game.player_list
(1, 2, 3)
sage: integer_game.shapley_value()
{1: 2, 2: 5, 3: 35}
A longer example of the Shapley value:
sage: long_function = {(): 0,
....: (1,): 0,
....: (2,): 0,
....: (3,): 0,
....: (4,): 0,
....: (1, 2): 0,
....: (1, 3): 0,
....: (1, 4): 0,
....: (2, 3): 0,
....: (2, 4): 0,
....: (3, 4): 0,
....: (1, 2, 3): 0,
....: (1, 2, 4): 45,
....: (1, 3, 4): 40,
....: (2, 3, 4): 0,
....: (1, 2, 3, 4): 65}
sage: long_game = CooperativeGame(long_function)
sage: long_game.shapley_value()
{1: 70/3, 2: 10, 3: 25/3, 4: 70/3}
|
2019-11-18 04:40:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31708386540412903, "perplexity": 13482.940552380356}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669431.13/warc/CC-MAIN-20191118030116-20191118054116-00485.warc.gz"}
|
http://nrich.maths.org/712/solution
|
### Pinned Squares
The diagram shows a 5 by 5 geoboard with 25 pins set out in a square array. Squares are made by stretching rubber bands round specific pins. What is the total number of squares that can be made on a 5 by 5 board?
### Poly-puzzle
This rectangle is cut into five pieces which fit exactly into a triangular outline and also into a square outline where the triangle, the rectangle and the square have equal areas.
### Power Crazy
What can you say about the values of n that make $7^n + 3^n$ a multiple of 10? Are there other pairs of integers between 1 and 10 which have similar properties?
# Ratty
##### Stage: 3 Challenge Level
If you know the sizes of the angles marked with coloured dots in this diagram, which angles can you find by calculation? If, in addition, you know that the lines $AB$ and $CD$ are parallel, which angles can you find?
Congratulations on your solutions to Ian Walker, Owen Jones and Tom Embury (Y7) St James Middle School, Bury St Edmunds, to students from Y9 and Y10, The Mount School, York and to Shabbir Tejani, age 13, Jack Hunt School, Peterborough. Here is Shabbir's solution:
Angle $G = 180$ - (angle $Y$ + angle $X$)
Angle $E = 180$ - angle $G$
Angle $F = 180$ - (angle $E$ + angle $Z$)
Angle $J =$ angle $F$
Angle $H = 180$ - angle $F$
Angle $I = 360$ - (angle $X$ + angle $H$ + angle $G$)
Angle $O = 180$ - angle $I$
Angle $N = 180$ - angle $O$
These are all the angles that can be found without additional information. If we are given the added information that the lines $AB$ and $CD$ are parallel then we know:
Angle $M =$ angle $Z$
Angle $L = 180$ - (angle $M$ + angle $N$)
Angle $P = 180$ - angle $L$
Angle $K = 180$ - angle $G$
Angle $Q = 180$ - angle $P$
Angle $R = 180$ - angle $K$
Angle $T = 180$ - angle $R$
Angles $S$ and $U$ cannot be found.
HOWEVER if lines $CC'$ and $AD$ are parallel, then angle $M =$ angle $S$. Therefore Angle $U = 180$ - (angle $S$ + angle $T$).
|
2014-11-28 14:47:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3877009451389313, "perplexity": 1344.0800259465748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931010469.50/warc/CC-MAIN-20141125155650-00199-ip-10-235-23-156.ec2.internal.warc.gz"}
|
https://laurentlessard.com/bookproofs/number-shuffle/
|
# Number shuffle
This is a number theory problem from the Riddler. Simple problem, not-so-simple solution!
Imagine taking a number and moving its last digit to the front. For example, 1,234 would become 4,123. What is the smallest positive integer such that when you do this, the result is exactly double the original number?
My solution appears behind the “Show Solution” toggle in the original post; Laurent summarises its key steps in the comments below.
## 6 thoughts on “Number shuffle”
1. Jason Weisman says:
Hi Laurent, interesting solution and appreciate the explanation of your thought process. I got the same answer but using a mechanical process with significantly less thought going into it. If you assume the last digit of the solution and use that as a “seed”, the answer falls out quickly.
Assume the last digit of the solution is 4, that means the next-to-last digit is 8 (2×4), the third-to-last digit is 6 (2×8=16, carry the one), fourth-to-last digit is 3 (2×6+1 carried from previously), fifth-to-last digit is 7 (2×3+1), etc. Continue until the first digit repeats the last digit. At that point you will be guaranteed to have a number which taking the last digit and moving to the front is double the original number, i.e., a “solution” but not necessarily the smallest solution (i.e., solution to the 538 puzzle).
Following this procedure with 4 as the seed, the “solution” is: 210526315789473684
Follow the same procedure using different last digit seeds to fairly quickly find all of the possible answers, the smallest being the 538 puzzle solution. That is the same as your answer: 105263157894736842
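Jason’s procedure is easy to automate; here is a hedged Python sketch (the function name is mine):

```python
# Build the number from its seed (last digit), doubling digit by digit
# with carries; stop when the cycle closes on the seed with no carry.
def from_seed(d0):
    digits, carry = [d0], 0
    while True:
        t = 2 * digits[-1] + carry
        d, carry = t % 10, t // 10
        if d == d0 and carry == 0:
            break                       # the leading digit has been placed
        digits.append(d)
    return int("".join(map(str, reversed(digits))))

print(from_seed(4))                              # 210526315789473684
print(min(from_seed(d) for d in range(2, 10)))   # 105263157894736842
# seeds 0 and 1 are excluded: 0 is trivial and 1 closes on a leading zero
```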
1. Thanks for the comment! Hadn’t occurred to me to just try different values for the last digit and then build the number from there!
1. Jason Weisman says:
Yes, I didn’t put as much thought into this week’s puzzle as usual. Actually in my solution you only need to arbitrarily choose one value for the last digit. All the other possible solutions fall out directly since the different solutions are just repeating cycles of that one. It’s apparent that the solution choosing 2 is the same as the solution choosing 4, just take the first digit and move it to the end.
1. Same is true in my solution, actually. First step is to pick $n$ such that $10^n-2$ is a multiple of $19$. This happens precisely when $n=17+18k$ for some $k$. Then, you pick $d_0$ (the last digit), and all solutions must have the form:
$\tfrac{10^{n+1}-1}{19}d_0$ so long as the number $M = \tfrac{10^n-2}{19}d_0$ has $n$ digits.
The smallest solution is when $k=0$ and $d_0=2$ (with $d_0=1$, $M$ has only 16 digits rather than 17); one can increase $d_0$ to get the rest of the solutions and then increase $k$, etc.
1. Jason Weisman says:
Laurent, I worked out the below list of other minimum values for “m-number shuffles”, i.e., numbers that replicate when multiplying by m and moving the last digit to the front. I used the seeding method to find solutions. Wondering whether your solution also generalizes for different m-multiples and can derive the same results.
2×105263157894736842 (18 digits)
3×1034482758620689655172413793 (28 digits)
4×102564 (6 digits)
5×102040816326530612244897959183673469387755 (42 digits)
6×1016949152542372881355932203389830508474576271186440677966 (58 digits)
7×1014492753623188405797 (22 digits)
8×1012658227848 (13 digits)
9×10112359550561797752808988764044943820224719 (44 digits)
2. Hi Jason,
Yes, the solution generalizes. For example, if the number has the form $Md_0$ where $M$ has $n$ digits and moving the last digit $d_0$ to the front results in multiplication by $m$, then the equation we must solve is:
$(10m-1)M = (10^n-m)d_0$. For example, if we let $m=3$, then we have: $29M=(10^n-3)d_0$. Therefore, $29$ must divide $10^n-3$. The smallest $n$ for which this is true is $n=27$. This is easy to test computationally. Since there are only 28 possible integers modulo 29, you never have to test more than 28 numbers and if you don’t find an answer, there does not exist one. The next step is to look for $M$, which is equal to $M=\frac{10^{27}-3}{29}d_0$. This number must have $n=27$ digits. If we let $d_0=1$ or $d_0=2$, it only has 26 digits. We first achieve 27 digits when we let $d_0=3$, so that’s the answer. The final answer is therefore:
$10M+d_0 = 1034482758620689655172413793$, and this is the same solution you found.
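This recipe is also easy to verify computationally; the following hedged sketch (function name mine) reproduces Jason’s list above:

```python
# A quick check of the general recipe: for multiplier m, find the smallest
# n with (10*m - 1) dividing (10^n - m), then the smallest digit d0 that
# makes M = (10^n - m)//(10*m - 1) * d0 an n-digit number.
def smallest_shuffle(m):
    q = 10 * m - 1
    n = 1
    while (pow(10, n, q) - m) % q != 0:
        n += 1
        if n > q:               # m is not a power of 10 mod q: no solution
            return None
    base = (10**n - m) // q
    for d0 in range(1, 10):
        M = base * d0
        if len(str(M)) == n:    # for m = 2..9 some d0 always works here
            return 10 * M + d0

for m in range(2, 10):
    print(m, smallest_shuffle(m))   # reproduces the list above
```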
|
2023-04-01 01:24:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7159003019332886, "perplexity": 407.3182543614532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00014.warc.gz"}
|
https://fuza.co.uk/category/mathematics/
|
# Lean Manufacturing: Limiting innovation?
I’ve been running a “lean” business for 6 months now, and I’ve noticed that Lean Manufacturing principles applied to software development could lead to bad business. Let me explain:
Primarily, my concern centers around the lean manufacturing principle of waste reduction. The constant strive to reduce waste makes sense in an industrial production line, but does it make sense in a startup or exploratory environment?
### An example
Let’s say there are 2 possible features you can work on. Feature 1 has a 90% chance of delivering £1 value, and Feature 2 has 10% chance of delivering £100 value.
Let’s say that the features take the same time to develop. From the maths,
Feature 1 “value” = 90% of £1 = £0.90
Feature 2 “value” = 10% of £100 = £10
And yet even with this understanding, because of the implicit risk and waste aversion of lean, we would say “there’s a 90% chance Feature 2 will be wasteful, whereas Feature 1 is sure to not be wasteful, therefore Feature 1 is a better idea”.
### Good outcomes, but not as good as they could be
The waste reduction aspect of lean manufacturing gives us a local optimisation, much like gradient descent. Imagine a ball on a hill, which will roll downhill to find the bottom. This is ok, and it will find a bottom (of the valley), but maybe not the bottom (of the world, maybe the Mariana trench). In that sense it is locally good, but not globally optimal.
The way mathematicians sometimes get round this is by repeatedly randomly starting the ball in different places: think a large variety of lat-longs. Then you save those results and take the best one. That way you are more likely to have found a global optimum.
So I’m wondering about whether this kind of random restarting makes sense in a startup world too. I guess we do see it in things like Google’s acquisitions of startups, Project Loon, etc. Perhaps we/I should be doing more off-the-wall things.
### Closing commentary
Perhaps it isn’t so odd that Lean Manufacturing has “reduce waste” as a principle… In a production line environment, reduction of waste is the same as increasing value.
Still, if the optimisation problem is “maximise value” this leads to different outcomes than “minimise waste”. I would argue we should, in almost every case, be focusing on maximising value instead.
As we’ve seen with following the rituals rather than the philosophy and mindset of agile, it is beneficial to actually think about what we’re doing rather than applying things without understanding.
# Climate Change
In the past week, I’ve been to some excellent talks. The first was on Biomarkers at the Manchester Literary and Philosophical Society, and the second was Misinformation in Climate Change at Manchester Statistical Society. And both of these followed the IMA’s Early Career Mathematicians conference at Warwick, which had some excellent chat and food for thought around Big Data and effective teaching in particular.
Whilst I could share my learnings about biomarkers for personalised medicine, which makes a lot of sense and I do believe it will help the world, instead I will focus on climate change. It was aimed at a more advanced audience and had some excellent content, thanks Stephan Lewandowski!
There are a few key messages I’d like to share.
### Climate is different to weather
This is worth being clear on: climate is weather over a relatively long period of time. Weather stations very near to one another can have very different (temperature) readings over time. Rather than looking at the absolute value, if you instead look at the changes in temperature you will be able to find correlations. It is these changes that give us the familiar global temperature-trend graphs.
### Misinformation
Given any time series of climate, it is possible to find local places where the absolute temperate trend goes down, particularly if you can pick the time window.
Interestingly, Stephan’s research has showed that belief in other conspiracy theories, such as that the FBI was responsible for the assassination of Martin Luther King, Jr., was associated with being more likely to endorse climate change denial. Presumably(?) this effect is related to Confirmation Bias. If you’re interested in learning more, take a look at the Debunking Handbook.
### Prediction is different to projection
According to Stephan, most climate change models are projections. That is, they use the historical data to project forward what is likely to happen. There are also some climate change models which are predictions, in that they are physics models which take the latest physical inputs and use them to predict future climate. These are often much more complex…
### Climate change is hard to forecast
I hadn’t appreciated also how difficult El Niño is to forecast. El Niño is a warming of the eastern tropical Pacific Ocean, the opposite (cooling) effect being called La Niña. Reliable estimates for El Niño are only available around 6 months ahead, which given the huge changes that happen as a result I find astonishing. The immediate consequences are pretty severe.
El Niño massively influences global temperatures. Scientists are trying to work out if there is a link between this and climate change (eg in Nature). Given how challenging this one section of global climate is, it is no wonder that global climate change is extremely difficult to forecast. Understanding this seems key to understanding how the climate is changing.
### The future
In any case, our climate matters. In as little as 30 years (2047), we could be experiencing climatically extreme weather. Unfortunately since CO2 takes a relatively long time to be removed from the atmosphere, even if we stopping emitting CO2 today we would still have these extreme events by 2069. Basically, I think we need new tech.
# Open Data
In a previous post, several months ago, we talked about Chaos and the Mandelbrot Set: an innovation brought about by the advent of computers.
In this post, we’ll talk about a present-day innovation that is promising similar levels of disruption: Open Data.
Open Data is data that is, well, open, in the sense that it is accessible and usable by anyone. More precisely, the Open Definition states:
A piece of data is open if anyone is free to use, reuse, and redistribute it – subject only, at most, to the requirement to attribute and/or share-alike
The point of this post is to share some of the cool resources I’ve found, so the reader can take a look for themselves. In a subsequent post, I’ll be sharing some of the insights I’ve found by looking at a small portion of this data. Others are doing lots of cool things too, especially visualisations such as those found on http://www.informationisbeautiful.net/ and https://www.reddit.com/r/dataisbeautiful/.
### Sources
One of my go-to’s is data.gov.uk. This includes lots of government-level data, of varying quality. By quality, I mean usability and usefulness. For example, a lat-long might be useful for some things, a postcode or address for other things, or an administrative boundary for yet others. This means it can be very hard to “join” the data together, as the way they store something like “location” is many different ways. I often find myself using intermediate tables that map lat-long into postcodes etc., which takes time and effort (and lines of code).
Another nice meta-source of datasets is Reddit, especially the datasets subreddit. There is a huge variety of data there, and people happy to chat about it.
For sample datasets, I use the ones that come with R, listed here. The big advantage with these is they are neat and tidy, so they don’t have missing values etc and are nicely formatted. This makes them very easy to work with. These are ideal for trying out new techniques, and are often used in worked examples of methods which can be found online.
Similarly useful are the kaggle datasets, which cover loads of things from US election polls to video games sales. If you are inclined they have competitions which can help structure your exploration.
A particularly awesome thing if you’re into social data is the European Social Survey. This dataset is collected through a sampled survey across Europe, and is well established. It is conducted every 2 years, since 2002, and contains loads of cool stuff from TV watching habits to whether people voted. It is very wide (ie lots of different things) and reasonably long (around 170,000 respondents), so great fun to play with. They also have a quick analysis tool online so you can do some quick playing without downloading the dataset (it does require signing up by email for a free login).
### Why is Open Data disruptive?
Thinking back to the start of the “information age”, the bottleneck was processing. Those with fast computers had the ability to do stuff noone else could do. Technology has made it possible for many people to get access to substantial processing power for very cheap.
Today the bottleneck is access to data. Google is making their business around mastering the world’s data. Facebook and twitter are able to exist precisely because they (in some sense) own data. By making data open, we start to be able to do really cool stuff, joining together seemingly different things and empowering anyone interested. Not only this, but in the public sector, open data means citizens can better hold government officials to account: no bad thing. There is a more polished sales pitch on why open data matters at the Open Data Institute (and they also do some cool stuff supporting Open Data businesses).
### Some dodgy stuff
There are obviously concerns around sharing personal data. Deepmind, essentially a branch of Google at this point, has very suspect access to unanonymised patient data. Google also recently changed the rules, making internet browsing personally identifiable:
We may combine personal information from one service with information, including personal information, from other Google services – for example to make it easier to share things with people you know. Depending on your account settings, your activity on other sites and apps may be associated with your personal information in order to improve Google’s services and the ads delivered by Google.
We’ve got to watch out, and as ever be mindful about who and what we allow our data to be shared with. Sure, this usage of data makes life easier… but at what privacy cost.
# DNA sequencing: Creating personal stories
Data matters. A great example of a smart use of data is genetic sequencing. This involves 3 billion base pairs, although scientists only know what around 1% of these do. The arguably most important ones are to do with creating proteins. By looking at people with traits, diseases or ancestry, scientists have been able to pick out those sets of genes which seem to match with those attributes. For example, breast cancer risk is 5 times higher if you have a mutation in either of the tumour-suppressing BRCA1 and BRCA2 genes.
Due to science, there are now commercial providers of DNA sequencing available, such as 23andme. They market this as a way to discover more about your ancestry and any genetic health traits you might want to watch out for. To try this out, I bought a kit to see how they surfaced the data in an understandable way. The process itself is really easy, you just give them money and post a tube of your spit to them.
After a few weeks wait for them to process it, you can look at your results. Firstly, you have your actual genetic sequencing. This is perhaps really only of interest (or any use) to geneticists. As part of their service, 23andme pull out the “interesting” parts of the DNA which have been shown (through maths/biology) to correspond to particular traits or ancestry.
They separate this out into:
• Health:
• Genetic risks
• Inherited conditions
• Drug response
• Traits (eg hair colour or lactose tolerance)
• Ancestry:
• Neanderthal composition
• Global ancestry (together with a configurable level of “speculativeness”)
• Family tree (to find relatives who have used the service too)
Part of what is smart about this service is that while it uses DNA as underlying data, it almost entirely hides this from the end user. Instead, they see the outcome for them. They have realised that people don’t care about a sequence like “agaaggttttagctcacctgacttaccgctggaatcgctgtttgatgacgt” but they do care about whether they have a higher risk of Alzheimer’s. Because some of these things are probabilistic, they also put a 1*-4* scale of “Confidence” against each result: again, this is easy to read at a glance.
Perhaps more visually interesting is the ancestry breakdown, though my own ancestry turned out not to be very exciting.
I hope this has been interesting. Commercial DNA sequencing is a real success story not just for biochemistry and genetics, but also for the industrialisation of these processes and the mathematics and software that makes it possible. The thing that is especially cool, according to me at least, is the ability to make something as complex as genetics accessible, understandable and useful.
# Proof: Little’s Law (why to Limit WIP)
Little’s Law states that:
The average number of customers in a queuing system = ( the rate at which customers enter the system ) x (the average time spent in the system)
Typically this might be applied to things like shoppers in a supermarket, but here we will focus on the application to software development. In a software development world, we often write it the same statement with different words, thinking about tasks:
Average Work in Progress = Average Throughput x Average Leadtime
Little’s law is beautifully general. It is “not influenced by the arrival process distribution, the service distribution, the service order, or practically anything else”[1]. This almost makes it self-evident, and since it is a mathematical theorem perhaps this is correct, since it is true in and of itself. Despite being so simple to describe, the simplest generalised proof that I have been able to find (and which we will not tackle here) is however trickier, since a solid grasp on limits and infinitesimals is required. Instead, we will consider a restricted case, suitable for most practical and management purposes, which is the same equation, with the condition that every task starts and finishes within the same finite time window. The mathematical way of saying this is that the system is empty at time t = 0 and at time t = T, where 0 < T < ∞.
### Proof
n(t) = the number of items in the system at time t
N = the number of items that arrive between t = 0 and t = T
λ = the average arrival rate of items between t = 0 and t = T. The arrival rate is equal to the departure rate (sometimes called throughput), since the system is empty at the beginning and the end.
L = the average number of items in the system between t = 0 and t = T. This is sometimes called “average Work in Progress (WIP)”
W = the average time items spend in the system between t = 0 and t = T. This is called W as a shorthand for wait time, but in software development we might call this leadtime
A = area under n(t) between t = 0 and t = T. This is the sum of all the time every item has spent queuing.
Using this notation, Little’s law becomes
L = λ x W
which we will prove now. The following equations can be assembled from these definitions. We will need to use these to assemble Little’s Law.
1. L = A/T (average number of items in the system = sum of time spent / total time)
2. λ = N/T (average arrival rate = total number of items / total time, since every item leaves before t=T)
3. W = A/N (average time in system = sum of all time spent / number of items)
We can now use these three equations to prove Little’s Law:
L = A/T from (1)
= (A/T)x(N/N) since N/N = 1
= (N/T)x(A/N) by rearranging fractions
= λ x W from (2) and (3)
This is what we wanted, so the proof is complete.
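To make the bookkeeping in equations (1)-(3) concrete, here is a small hedged simulation sketch (not part of the original proof): items arrive at random, each stays a random time, and the three averages are computed directly from their definitions.

```python
# Toy check of Little's Law: Poisson arrivals, random sojourn times,
# system forced to be empty by time T; compute L, lambda and W directly.
import random

random.seed(1)
T, lam = 10_000.0, 5.0
events, t = [], 0.0
while True:
    t += random.expovariate(lam)        # next arrival time
    stay = random.uniform(0.05, 0.15)   # time this item spends in the system
    if t + stay > T:
        break                           # every item must leave before T
    events.append((t, t + stay))

N = len(events)
A = sum(d - a for a, d in events)   # total item-time in the system
L = A / T                           # (1) average number in the system
lam_hat = N / T                     # (2) measured arrival rate
W = A / N                           # (3) average time in the system
print(round(L, 4), round(lam_hat * W, 4))   # equal, as Little's Law says
```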
### What does this mean?
A trick to getting good outcomes from Little’s Law is understanding which system we want to understand.
If we consider our queuing system to be our software development team, our system is requirements coming in, then being worked on and finished. In this case, W is the development time, and each item is a feature or bug fix, say.
To have a quicker time to market, and to be able to respond to change more quickly, we would love for our so-called “cycle time” W to be lower. If the number of new features coming into our system is the same, then we can achieve that by lowering L, the average work in progress. This is part of why Kanban advocates “limiting work in progress”.
Alternatively, we can consider our queuing system to be the whole software requirement generation, delivery, testing and deployment cycle. In this case, we might have W being the time taken between a customer needing a software feature and it being used by them. By measuring this, we get a true picture of time to market (our new W, which is the true measure of “lead time”), and with some additional measurements we would be able to discover the true cost of the time spent delivering the feature (since our new A measures total time invested).
Outside of the development side of software, we can apply Little’s Law to support tickets. We can, for example, state how long a customer will on average have to wait for their query to be closed, by looking at the arrival rate of tickets and the number of items in the system. If there are on average 10 items in the queue and items arrive at 5 per hour, the average wait time will be 2 hours, since the rearrangement of Little’s Law to L/λ = W gives us 10 / 5 = 2.
I hope that was interesting, if you would like me to explain the proof in the general case, let me know in the comments. I think it would be about 10 pages for me to explain, so in the spirit of lean I will only do this if there is a demand for it.
# Thoughts on Quantum Computing and engaging people into science
Recently, the Prime Minister of Canada Justin Trudeau amazed journalists by giving a short explanation of quantum computing. In the weeks after, many articles about quantum computing were written commending the prime minister for explaining so eloquently this impenetrable new science. And while it is very exciting to have so many people engage with what is considered the next great frontier of computing, some of the explanations were rather disappointing, and because of the (perceived) complexity of quantum computing it is very easy to give the impression that it is even more magical and mysterious than it really is.
The biggest misconception that propagated was that the miracle of quantum computing is down to the wave-particle duality of fundamental components such as the proton, neutron, electron etc. The key to quantum computing in fact relies on the superposition of states: taking for example a proton, this has a property (which we don’t need to go into but you can read all about) called spin, which when measured in a laboratory will always be “up” or “down”. The fact that a stream of protons can also act as a wave is not the most relevant fact here.
So just quickly: what of the fact that spin is always measured as one of two states? It is that the only way of modelling mathematically the spin of a proton involves inherent randomness. It is possible to put the proton in an equal superposition of both up and down, with the binary result of an experiment only being resolved when actually measured, both up and down equally likely outcomes. Until then, the spin state is genuinely both up and down equally. But the very meaning of the word “quantum” implies the existence of distinct values and therefore no measurement can actually reveal the superposition we know is there. Rather, in a sense, we force the universe into making a decision at the last possible moment!
So Justin Trudeau and various journalists are right in saying that a bit is binary but a qubit (quantum bit) can hold multiple values at the same time, but it is not the wave-particle duality that underlies this phenomenon.
Quantum computing is obviously far too complex a subject to go into further, but I strongly encourage you to do your own research. It is fantastic that this subject has been given more media attention, but any subject that captures your interest always deserves further research beyond journalistic simplifications. This post isn’t so much about quantum computing itself, but a reminder that initial engagement is just the first step!
Written by William A. Lebreton.
A few weeks ago, I attended the Presidential Address at the Institute of Mathematics and its Applications (IMA). In it, the IMA President Chris Linton argued the need for bridging the distinctions between “Pure” and “Applied” mathematics. Without giving too many spoilers, he also put the case that the “success path” for people at university is often seen as further academic study, eventually leading to professorship. Progressing into being a teacher, actuary, or any other job is culturally, perhaps, seen as a failure. In summary, I took Chris’ Address as a call for action: for changing the culture of success in mathematics to being more balanced across academic, industrial, educational, and other options. If you’re interested in mathematics, I would recommend going to one of the branch meetings where Chris will be repeating this talk.
It is this same belief that led me to start full time working on Fuza a little over 3 months ago. You can see an example of maths applied to industry in this post. Indeed perhaps more broadly, I believe we should be using the science we already know:
• It has been shown recently that 30 minutes of visits of green spaces reduces instances of depression and high blood pressure [source].
• In another study, countryside walks were associated with reduced rumination (associated again with depression). [source]
• There is a weight of science supporting the benefits of urban trees [source]
To me, it seems logical and worthwhile that some city should conduct a practical experiment to see if we could reduce depression, perhaps by simply planting some trees. It seems sensible to conduct a small trial, in order to get some fast feedback. The alternative to a local trial is either to do nothing, or wait for a central government policy to implement it everywhere.
In my town of Letchworth Garden City (UK), we are lucky to have a heritage foundation who do lots of good work, for example converting one of our 100-year-old houses into an eco-home [source]. Experiments do happen, but I would love to see more.
As a society we are, according to me at least, not implementing enough of these experiments. I have written this blog in an attempt to inspire some of my local decision makers to try more things for the benefit of their fellow citizens.
This is of course just one piece of science that I think it would be interesting to implement. Perhaps there are things you care about that we should be experimenting/implementing within our communities or businesses. I hope we can all do our bit to help get science used in reality. In the meantime, perhaps I will go out for a countryside walk.
# Pretty maths
Bear with this post as it goes through some equations at the beginning, but it is worth it. We’ll be doing some of the calculations behind a famously pretty picture.
That picture shows the set of numbers $c$ such that the sequence $z_{n+1} = z_n^2 + c$, starting from $z_0 = 0$, is bounded. These $z$ are complex numbers, which we’ll ignore for now. It is much easier to understand if we look at some examples:
Let’s say c = -1.
We start with $z_0 = 0$
$z_1 = z_0^2 + c = 0^2 - 1 = -1$
$z_2 = z_1^2 + c = (-1)^2 - 1 = 0$
$z_3 = z_2^2 + c = (0)^2 - 1 = -1$
This is repeating, and the numbers are bounded.
Let’s now try c = 0.5.
We start with $z_0 = 0$
$z_1 = z_0^2+c = 0^2 + 0.5 = 0.5$
$z_2 = z_1^2 +c = (0.5)^2 + 0.5 = 0.75$
$z_3 = z_2^2+c = (0.75)^2 + 0.5 = 1.0625$
$z_4 = z_3^2 + c = (1.0625)^2 + 0.5 = 1.62890625$
$z_5 = z_4^2 + c = (1.62890625)^2 + 0.5 = 3.15333557129$
$z_6 = ...$
We can see that these numbers are getting bigger and bigger, and it is not bounded.
One more: c=-1.9
$z_0 = 0$
$z_1 = z_0^2 + c = 0^2 - 1.9 = -1.9$
$z_2 = z_1^2 + c = (-1.9)^2 - 1.9 = 1.71$
$z_3 = z_2^2 + c = (1.71)^2 - 1.9 = 1.0241$
$z_4 = z_3^2 + c = (1.0241)^2 - 1.9 = -0.85121919$
$z_5 = z_4^2 + c = (-0.85121919)^2 - 1.9 = -1.17542589058$
$z_6 = ...$
It bounces around a lot, never getting very big or very small, so it is bounded. It is kinda fun to sit with a calculator and try this.
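If you would rather let Python do the button-pressing, here is a minimal sketch of the same iteration:

```python
# Iterate z -> z^2 + c from z = 0 and print the orbit.
def orbit(c, steps=8):
    z = 0.0
    for _ in range(steps):
        z = z * z + c
        print(round(z, 6))

orbit(-1)     # cycles between -1 and 0: bounded
orbit(0.5)    # grows without bound
orbit(-1.9)   # bounces around but stays bounded
```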
Mathematicians call this kind of system “chaos”, as it is very sensitive to the starting conditions. Sometimes this is called the butterfly effect. Note that chaotic is not the same as random: in chaotic systems if you know everything about the initial conditions you know what will happen, whereas in random systems even if you knew everything about the initial conditions you wouldn’t know what was going to happen.
Benoit Mandelbrot was one of the first mathematicians to have access to a computer. Hopefully you can also see now why Benoit Mandelbrot needed a computer to work these out. He repeated this for lots of values of c. The pretty picture we started with is really a plot of the set of c (called the Mandelbrot set), where the colours indicate what happens to the sequence (eg how quickly it diverges, if it does).
You can zoom into the colourised picture to see how complex this is here. Lots of people (me included) think it is pretty cool. It is really worth taking a look to appreciate the complexity.
## Other than being pretty, why does this matter?
Stepping back: this picture is made from the formula $z_{n+1} = z_n^2 + c$. This is so simple, and yet gives rise to infinite complexity. In the words of Jonathan Coulton,
Infinite complexity can be defined by simple rules
Benoit Mandelbrot went on to apply this to the behaviour of economic markets, among other things. Later people have applied this to fluid dynamics (video), medicine, engineering, and many other areas. Apparently there is even a Society for Chaos Theory in Psychology & Life Sciences..!
Orley Ashenfelter, an economist at Princeton, wanted to guess the prices that different vintages of Bordeaux wine would have. This prediction would be most useful at the time of picking, so that investors can buy the young wine and allow it to come of age. In his own words:
The goal in this paper is to study how the price of mature wines may be predicted from data available when the grapes are picked, and then to explore the effect that this has on the initial and final prices of the wines.
For those of you not so au-fait with wine, prices vary a lot. At auction in 1991, a dozen bottles from Lafite vineyard were bought for:
• $649 for a 1964 vintage
• $190 for a 1965 vintage
• $1274 for a 1966 vintage
Wines from the same location can vary by a factor of 10 between different years. Before Ashenfelter’s paper, people predicted wine quality by experts, who tasted the wine and then guessed how good it would be in future. Ashenfelter’s great achievement was to bring some simple science to this otherwise untapped field (no pun intended).
He started by using the things that were “common knowledge”: in particular that weather affects quality and thus selling price. He checked this by looking at the historical data:
In general, high quality vintages for Bordeaux wines correspond to the years in which August and September are dry, the growing season is warm, and the previous winter has been wet.
Ashenfelter showed that 80% of price variation could be down to weather, and the remaining 20% down to age. With the given inputs, the model he built was:
log(Price) = Constant + 0.238 x Age + 0.616 x Average growing season temperature (April-September) -0.00386 x August rainfall + 0.001173 x Prior rainfall (October-March)
As it turned out, this simple model was better at guessing quality than the “wine expert”: a success for science against pure intuition. The smart part of his approach was getting insight into the things people felt mattered (weather) and checking that wisdom. Here, he showed that yes, it is quite appropriate to use weather and age to model wine prices.
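To make the model concrete, here is a small hedged sketch: the coefficients are copied from the equation above, the intercept is a placeholder (the post does not give it), and the weather inputs are invented values, so only relative comparisons between vintages are meaningful.

```python
# Hedged sketch of the printed model; only ratios between vintages matter
# because the unknown intercept cancels in the comparison.
import math

def log_price(age, season_temp, aug_rain, prior_rain, constant=0.0):
    return (constant + 0.238 * age + 0.616 * season_temp
            - 0.00386 * aug_rain + 0.001173 * prior_rain)

good = log_price(age=10, season_temp=17.5, aug_rain=40, prior_rain=650)
bad = log_price(age=10, season_temp=15.5, aug_rain=120, prior_rain=350)
print(math.exp(good - bad))   # implied price ratio: warm dry year vs cool wet
```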
Through the age variable, it also gives an average 2-3% annual return on investment [1] (note this is pre-2008 so is unlikely to behave like this today[2]).
Should I buy wine? Quite possibly, as long as I don’t drink it all.
|
2020-09-23 23:00:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4239518344402313, "perplexity": 1118.5745855980358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400212959.12/warc/CC-MAIN-20200923211300-20200924001300-00043.warc.gz"}
|
https://puzzlecritic.wordpress.com/tag/algebra/
|
## Pairs and squares
This puzzle, which I posted to twitter (@puzzlecritic) a few weeks ago, is one of my variations on a problem that appeared in a mathematical discussion group on LinkedIn:
Find three different positive integers such that the sum of any two is a square.
## The Vault: Powers of two
My new book of maths puzzles is on its way! It’s packed full of interesting problems to sink your teeth into. I’ll post an update as the launch approaches.
In the meantime, here is a fantastic problem from the USA:
It is given that $2^{2004}$ is a 604-digit number beginning with a 1. How many of the numbers $2^0, 2^1, 2^2, 2^3, ..., 2^{2003}$ begin with a 4?
## Shares
From the 2004 Tournament of Towns:
Each day, the price of the shares of the corporation “Soap Bubble, Limited” either increases or decreases by n%, where n is an integer such that 0<n<100. The price is calculated with unlimited precision. Does there exist an n for which the price can take the same value twice?
## 42 Points
From the 2007 Australian Maths Competition:
There are 42 points $P_1, P_2, ..., P_{42}$, placed in order on a straight line so that each distance from $P_i$ to $P_{i+1}$ is $\dfrac{1}{i}$ for $1\leq i\leq 41$. What is the sum of the distances between every pair of these points?
## Rugs
From the 1998 University of Waterloo Fermat Contest:
Three rugs have a combined area of 200 sq m. By overlapping rugs to cover a floor of area 140 sq m, the area which is covered by exactly two layers of rug is 24 sq m. What area of floor is covered by three layers of rug?
## A curious operation
From the Putnam Competition (I believe):
Suppose # is a binary operation on a set S such that the following properties hold:
1. For all a, b, c in S, (a#b)#c = a#(b#c);
2. For all a, b in S, if a#b = b#a then a=b.
Prove that, for all x, y, z in S, we have x#y#z = x#z.
|
2017-08-20 19:17:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6995819807052612, "perplexity": 786.0785069565169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106984.52/warc/CC-MAIN-20170820185216-20170820205216-00427.warc.gz"}
|
https://nuclearchickencollusion.wordpress.com/2013/03/08/what-do-kittens-have-to-do-with-rising-tuition/
|
# What do kittens have to do with rising tuition?
If you read the Financial Times, you might suspect from an article on Monday that kittens have something to do with rising tuition and Prisoners’ Dilemmas. Let me assure you that they don’t.
A friend of mine sent me the article, which cites a model designed by a team of Bank of America consultants who use the Prisoners’ Dilemma to explain rising college tuition. Here is the graphic they used:
Fig. 1: Things that are pairwise irrelevant to each other:
a kitten, the Prisoner’s Dilemma, and rising tuition.
They explain that the college ranking system (assuming two colleges) is a zero-sum game. If one college moves up, the other one moves down. “A college can move up in the rankings if it can raise tuition and therefore invest in the school by improving the facilities, hiring better professors and offering more extracurricular activities.” And therefore, they conclude, this is why college tuitions have been rising and why student debt will continue to rise.
First glaring problem: (raise, raise) is a Pareto-optimal outcome as they’ve set up this game, but what they probably meant to say was that it is a Nash equilibrium. Or maybe they meant to say that “raise” is the best response for each college. Anyway, in this game, (don’t raise, don’t raise) is also Pareto-optimal (but not a Nash equilibrium)!
Secondly, they’re trying to illustrate a kind of ratcheting problem: both colleges raise tuition to raise the quality of the resources at the school, in order to maintain their rankings. But, this means it’s a repeated game. In repeated games that have a finite horizon, defection happens at every step, but at infinite horizon games, cooperation can occur. Now, let’s just assume that this is an infinite horizon game, which is what the folks at B of A are assuming when they predict that college tuition will keep rising indefinitely, beyond mere inflation. What incentive is there to cooperate and keep tuition low? According to this game, none. And according to what you might expect in reality, none – is it plausible that, in the absence of antitrust laws, that colleges would want to collude to keep tuition low, and that because they can’t collude, they are doomed to raise tuition every year against their wills? Nope.
Then, we come to the matter that in fact this game can’t be infinite horizon as it is presented here. The simple reason is that, even if education is becoming a larger and larger share of a household’s spending, and even if the student is taking out loans and borrowing against his future expected earnings, he still has a budget set that he can’t exceed. Furthermore, the demand for attending college at a particular university should drop as soon as the tuition exceeds the expected lifetime earning/utility advantage for whatever the student sees himself doing in 4 (or more) years over the alternative. So, there will be some stage at which the utilities change and it becomes a best strategy for neither school to increase its tuition. So, it’s a finite stage game and the increase will stop somewhere, namely, where price theory says it should. [1]
Finally, it’s not clear that increasing tuition actually has such a strong effect on school rankings or that colleges are in such a huge rankings race. And, even if students at colleges outside the very top schools tend to choose a college based on things like food quality and dorm rooms, students don’t demand infinitely luxurious college experiences at infinite prices. Evidence: Columbia students feel they’re overpaying for food, and feel entitled to steal Nutella.
The lessons here are these: It’s not a Prisoner’s Dilemma in a strong sense if the cooperative result isn’t strictly preferred to the Nash equilibrium. Don’t model a tenuous game where the game isn’t relevant to the ultimate result (tuitions will stop rising at some point). Don’t assume that trends are linear, when they are definitively not linear. And, don’t put a kitten on your figure just because you have some white space — it really doesn’t help.
———————
[1] Actually, the game doesn't have to be finite horizon. Suppose the upper limit that the colleges know they can charge is $A$, and the current tuition is $B_t$. Then, at each stage, they could increase tuition by $0.5(A - B_{t-1})$. But, as the tuition approaches $A$, the increases become smaller and smaller until they pretty much just vanish, and it would be the same as stopping, because there is a time at which the tuition would stop affecting rank (a college isn't going to improve its rank by charging each student an extra cent).
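The footnote's tuition path is easy to simulate; a quick sketch with assumed dollar figures (the cap and starting tuition are made up):

A, B = 80_000, 40_000    # assumed cap A and starting tuition B_0
for t in range(1, 11):
    B += 0.5 * (A - B)   # each year close half the remaining gap
    print(f"year {t}: tuition = {B:,.0f}")
# The increments halve each year, so tuition converges to A without ever
# reaching it -- the increases effectively stop on their own.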
|
2018-01-23 00:11:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3766225576400757, "perplexity": 1282.8407998956845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891546.92/warc/CC-MAIN-20180122232843-20180123012843-00758.warc.gz"}
|
https://www.tutorialspoint.com/how-do-you-directly-overlay-a-scatter-plot-on-top-of-a-jpg-image-in-matplotlib
|
# How do you directly overlay a scatter plot on top of a jpg image in Matplotlib?
To directly overlay a scatter plot on top of a jpg image, we can take the following steps −
• Load an image, "bird.jpg", using the imread() method, which reads an image from a file into an array.
• Display the array as an image using the imshow() method.
• To plot scatter points on the image, make lists for x_points and y_points.
• Generate random numbers for x and y and append them to the lists.
• Plot the x and y points using the scatter() method.
• To display the figure, use the show() method.
## Example
import numpy as np
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True
data = plt.imread("bird.jpg")  # read the image file into an array
im = plt.imshow(data)          # display the image as the plot background
x_points = []
y_points = []
for i in range(10):
    x_points.append(np.random.randint(0, 700))
    y_points.append(np.random.randint(0, 700))
plt.scatter(x_points, y_points, c=x_points)  # overlay the scatter points
plt.show()
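If the scatter points should instead use a conventional bottom-left origin, note that imshow() places the image origin at the top-left by default; flipping it is a one-line variant (not part of the original steps):

im = plt.imshow(data, origin="lower")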
## Output
Updated on 06-May-2021 13:59:04
|
2022-08-17 22:48:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24624161422252655, "perplexity": 5841.348066584201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573118.26/warc/CC-MAIN-20220817213446-20220818003446-00700.warc.gz"}
|
https://gmatclub.com/forum/the-average-arithmetic-mean-of-9-scores-is-n-if-a-10th-score-is-the-280219.html
|
# The average (arithmetic mean) of 9 scores is N. If a 10th score is the
Senior PS Moderator
Joined: 26 Feb 2016
Posts: 3341
Location: India
GPA: 3.12
The average (arithmetic mean) of 9 scores is N. If a 10th score is the [#permalink]
Updated on: 30 Oct 2018, 01:01
Difficulty: 15% (low)
Question Stats: 90% (01:07) correct, 10% (01:06) wrong, based on 38 sessions
The average (arithmetic mean) of 9 scores is N. If a 10th score is then included with the original 9, the average of the 10 scores is T. Which of the following expressions represents the value of the 10th score?
(A) $$10(T - N)$$
(B) $$10T - 9N$$
(C) $$\frac{10T - 9N}{10}$$
(D) $$\frac{10T - 9N}{9}$$
(E) $$\frac{10T - 9N}{2}$$
Originally posted by pushpitkc on 30 Oct 2018, 00:56.
Last edited by Bunuel on 30 Oct 2018, 01:01, edited 1 time in total.
Edited the question.
Intern
Joined: 06 Oct 2018
Posts: 16
Location: Oman
Concentration: Marketing, General Management
GMAT 1: 710 Q49 V38
GPA: 3.33
WE: Public Relations (Advertising and PR)
The average (arithmetic mean) of 9 scores is N. If a 10th score is the [#permalink]
Updated on: 30 Oct 2018, 01:17
The average of the nine numbers is given by $$\frac{a_1+a_2+a_3+...+a_9}{9}=N$$.
Therefore, $$a_1+a_2+a_3+...+a_9= 9N$$. (Let this be equation 1.)
Now the average of the nine numbers together with an added 10th number is given as T.
Therefore, $$\frac{a_1+a_2+a_3+...+a_9+a_{10}}{10}=T$$. (Let this be equation 2.)
From equation 1, we can substitute $a_1+a_2+...+a_9$ with $9N$ in equation 2, giving us:
$$\frac{9N+a_{10}}{10}=T.$$
Therefore, $$a_{10}=10T-9N$$ (Choice B).
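A quick numeric sanity check of choice (B) with random scores (my own verification, not part of the original posts):

import random
scores = [random.randint(0, 60) for _ in range(9)]
tenth = random.randint(0, 60)
N = sum(scores) / 9               # average of the original 9 scores
T = (sum(scores) + tenth) / 10    # average once the 10th score is included
print(tenth, 10 * T - 9 * N)      # the two printed values agree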
Originally posted by manavivarma on 30 Oct 2018, 01:13.
Last edited by manavivarma on 30 Oct 2018, 01:17, edited 1 time in total.
Manager
Joined: 01 Jan 2018
Posts: 80
Re: The average (arithmetic mean) of 9 scores is N. If a 10th score is the [#permalink]
### Show Tags
30 Oct 2018, 01:16
Sum initially=9*N
Sum finally=9N+x where x is the 10th term
Given,
(9N+x)/10=T
So, x=10T-9N
B should be the correct choice
|
2019-10-17 00:24:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9058465361595154, "perplexity": 6896.683251686436}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00279.warc.gz"}
|
https://space.stackexchange.com/questions/45930/how-to-fit-into-a-cubesat-29-trackable-high-drag-subsatellites-with-well-defin
|
# How to fit, into a cubesat, 29 trackable high drag subsatellites with well-defined aerodynamic profiles
Johnathan McDowell's recent tweet says:
The @AerospaceCorp Aerocube-10a cubesat carries 29 small passive high-drag subsatellites used to probe the density of the upper atmosphere. 3 have been ejected so far and the 2nd one reentered today. Aerocube-10a and 10b remain close to their deployment orbits.
In order to probe atmospheric density (I'm assuming passively, not with telemetry and sensors) these will have to have a reasonably large drag and a well-defined and stable aerodynamic profile, and also be easily tracked for orbit measurements. They will also have to be initially small enough so that 29 of them can fit into a cubesat, which we now know can be no larger than 27U.
How are all these requirements simultaneously satisfied?
• @CamilleGoudeseune Thank you for the help, that's much better. – uhoh Aug 10 '20 at 2:10
• What is Aerocube-10b's relationship to Aerocube-10a? – Wayne Conrad Aug 10 '20 at 20:49
• @WayneConrad I don't know, I've just quoted JM's tweet since he's a well-recognized authority on these kinds of things. If it's not clear from a quick search then perhaps it's ripe for asking a s a new question! – uhoh Aug 10 '20 at 22:49
There is an outline of the design here:
Each probe weighs 16 grams and consist of three 98 mm diameter aluminum sheets at 90 degrees to each other, effectively forming a sphere. The intent is to be lightweight and have a constant cross section, independent of orientation to the velocity direction so that atmospheric drag can be measured in-situ. RF modeling predicts that the atmospheric probes will have a radar cross section equal to 1U CubeSats, which have been tracked on-orbit many times.
The designers describe them as:
...a set of 28 individually releasable atmospheric probes. These lightweight circular probes, similar in size to CDs, spring open into spherical objects. Due to the probes’ large surface areas being exposed to the atmosphere, they lose altitude quickly and burn up in a matter of weeks.
The second link has a helpful picture - they look like the X/Y/Z planes crossing a sphere, and it makes sense that this would mean a constant circular aerodynamic cross-section regardless of how it tumbles around. This shape also makes them a corner reflector, which means that they will be easily trackable by radar.
How do 28 of these fit?
The cubesat carrying these is only 1.5 U. If they were real spheres each with a volume of $$\frac{4}{3} \pi r^3$$ with a close packed volume fraction of $$\frac{\pi}{3 \sqrt{2}}$$ they would need 18.6 U.
However, the probes are stored flat and then pop open when deployed. This explains how they can all fit in - 98mm diameter would fit nicely inside a standard cubesat of 1.5U size. Each of the three sheets is probably less than 0.5mm thick, judging by the weight, so there is plenty of space for 29 of them to be stored one alongside another and still leave room for the rest of the workings of the satellite.
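That back-of-envelope figure is easy to verify; a quick sketch of the arithmetic (28 probes, as in the heading above):

import math
r = 9.8 / 2                              # probe radius in cm (98 mm diameter)
sphere = (4 / 3) * math.pi * r ** 3      # volume of one opened probe, cm^3
packing = math.pi / (3 * math.sqrt(2))   # close-packing fraction, ~0.74
print(28 * sphere / packing / 1000)      # ~18.6, i.e. 18.6 U (1 U = 1000 cm^3)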
• The X Y Z planes also form nice corner-cube reflectors, so the reflected tracking signal will be independent of the orientation as well. (Corner cubes have been used in maritime radar systems for decades, of course, for the same basic reason). – alephzero Aug 10 '20 at 1:26
• @alephzero yes indeed! These are beautiful little devices :-) That should be part of the answer, or a second answer! – uhoh Aug 10 '20 at 2:08
• @alephzero added, thanks! I had forgotten quite how corner-cubes work, but you're quite right that this should make them neatly trackable. I guess this offsets the fact that they're a little smaller than a standard 1U cube. – Andrew Aug 10 '20 at 15:23
|
2021-04-23 02:16:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6173205375671387, "perplexity": 1648.6590403611797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039626288.96/warc/CC-MAIN-20210423011010-20210423041010-00042.warc.gz"}
|
https://stats.stackexchange.com/tags/stepwise-regression/hot
|
# Tag Info
338
I think this approach is mistaken, but perhaps it will be more helpful if I explain why. Wanting to know the best model given some information about a large number of variables is quite understandable. Moreover, it is a situation in which people seem to find themselves regularly. In addition, many textbooks (and courses) on regression cover stepwise ...
71
Check out the caret package in R. It will help you cross-validate step-wise regression models (use method='lmStepAIC' or method='glmStepAIC'), and might help you understand how these sorts of models tend to have poor predictive performance. Furthermore, you can use the findCorrelation function in caret to identify and eliminate collinear variables, and the ...
39
I fully concur with the problems outlined by @gung. That said, realistically speaking, model selection is a real problem in need of a real solution. Here's something I would use in practice. Split your data into training, validation, and test sets. Train models on your training set. Measure model performance on the validation set using a metric such as ...
33
after performing a stepwise selection based on the AIC criterion, it is misleading to look at the p-values to test the null hypothesis that each true regression coefficient is zero. Indeed, p-values represent the probability of seeing a test statistic at least as extreme as the one you have, when the null hypothesis is true. If $H_0$ is true, the p-value ...
30
I would recommend trying a glm with lasso regularization. This adds a penalty to the model for number of variables, and as you increase the penalty, the number of variables in the model will decrease. You should use cross-validation to select the value of the penalty parameter. If you have R, I suggest using the glmnet package. Use alpha=1 for lasso ...
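As an illustration of that suggestion, here is an analogous sketch in Python's scikit-learn rather than the R glmnet package the answer names (data and variable names are illustrative); LassoCV selects the penalty by cross-validation:

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                      # 30 candidate predictors
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=200)    # only two of them matter

model = LassoCV(cv=5).fit(X, y)                     # penalty chosen by 5-fold CV
print("alpha:", model.alpha_)
print("kept predictors:", np.flatnonzero(model.coef_))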
27
As I explained in my comment on your other question, step uses AIC rather than p-values. However, for a single variable at a time, AIC does correspond to using a p-value of 0.15 (or to be more precise, 0.1573): Consider comparing two models, which differ by a single variable. Call the models $\cal{M}_0$ (smaller model) and $\cal{M}_1$ (larger model), and ...
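A one-line sketch of where the 0.1573 comes from (my restatement of the standard likelihood-ratio argument): adding one parameter lowers the AIC exactly when the likelihood-ratio statistic exceeds 2, and under $H_0$ that statistic is $\chi^2_1$-distributed, so $$\mathrm{AIC}_1 < \mathrm{AIC}_0 \iff 2(\hat\ell_1 - \hat\ell_0) > 2, \qquad P(\chi^2_1 > 2) \approx 0.1573.$$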
23
To expand on Zach's answer (+1), if you use the LASSO method in linear regression, you are trying to minimize the sum of a quadratic function and an absolute value function, i.e.: $$\min_{\beta} \; \; (Y-X\beta)^{T}(Y-X\beta) + \sum_i |\beta_i|$$ The first part is quadratic in $\beta$ (gold below), and the second is a square shaped curve (green below). The ...
20
There are a few different issues here. Probably the main issue is that model selection (whether using p-values or AICs, stepwise or all-subsets or something else) is primarily problematic for inference (e.g. getting p-values with appropriate type I error, confidence intervals with appropriate coverage). For prediction, model selection can indeed pick a ...
18
I would recommend not performing stepwise model building, unless you are looking for biased (inflated) coefficients, biased (deflated) p-values, and inflated model fit statistics. The fundamental problem is that all of the inferences in one's final model carry a typically invisible/silent and usually uninterpretable series of "conditional upon all these ...
16
To answer the question, there are several options:
1) all-subset by AIC/BIC
2) stepwise by p-value
3) stepwise by AIC/BIC
4) regularisation such as LASSO (can be based on either AIC/BIC or CV)
5) genetic algorithm (GA)
6) others?
7) use of non-automatic, theory ("subject matter knowledge") oriented selection
Next question would be which method is better. ...
16
Model averaging is one way to go (an information-theoretic approach). The R package glmulti can perform linear models for every combination of predictor variables, and perform model averaging for these results. See http://sites.google.com/site/mcgillbgsa/workshops/glmulti Don't forget to investigate collinearity between predictor variables first though. ...
15
The LASSO and forward/backward model selection both have strengths and limitations. No far sweeping recommendation can be made. Simulation can always be explored to address this. Both can be understood in the sense of dimensionality: referring to $p$ the number of model parameters and $n$ the number of observations. If you were able to fit models using ...
15
The primary advantage of stepwise regression is that it's computationally efficient. However, its performance is generally worse than alternative methods. The problem is that it's too greedy. By making a hard selection on the next regressor and 'freezing' the weight, it makes choices that are locally optimal at each step, but suboptimal in general. And, ...
13
Here is a counterexample using randomly generated data and R:

library(MASS)
library(leaps)
v <- matrix(0.9, 11, 11)
diag(v) <- 1
set.seed(15)
mydat <- mvrnorm(100, rep(0, 11), v)
mydf <- as.data.frame(mydat)
fit1 <- lm(V1 ~ 1, data = mydf)
fit2 <- lm(V1 ~ ., data = mydf)
fit <- step(fit1, formula(fit2), direction = 'forward')
...
12
The facts that you are getting different answers from forward and backward selection, and that you get different answers when you change the seed, should give you pause. Clearly, these can't all be right. Most likely, none of them are. The simplest answer is that you should not use these methods at all. Here are some threads you might want to read: ...
11
I would not recommend you use that procedure. My recommendation is: Abandon this project. Just give up and walk away. You have no hope of this working. Setting aside the standard problems with stepwise selection (cf., here), in your case you are very likely to have perfect predictions due to separation in such a high-dimensional ...
11
Regression based on principal components analysis (PCA) of the independent variables is certainly a way to approach this problem; see this Cross Validated page for one extensive discussion of pros and cons, with links to further related topics. I don't see the point of the regression you propose after choosing the largest components. The "reconstructed" ...
11
I am not aware of situations, in which stepwise regression would be the preferred approach. It may be okay (particularly in its step-down version starting from the full model) with bootstrapping of the whole stepwise process on extremely large datasets with $n>>p$. Here $n$ is the number of observations in a continuous outcome (or number of records ...
11
Two cases in which I would not object to seeing step-wise regression are:
• Exploratory data analysis
• Predictive models
In both these very important use cases, you are not so concerned about traditional statistical inference, so the fact that p-values, etc., are no longer valid is of little concern. For example, if a research paper said "In our pilot study, ...
11
Do not use step-wise regression. Because step-wise regression almost certainly will ensure biased results. All statistics produced through step-wise model building have a nested chain of invisible/unstated "conditional on excluding X" and/or "conditional on including X" statements built into them with the result that: p-values are biased; variances are ...
10
Your question has an implicit assumption that $R^2$ is a good measure of the quality of the fit and is appropriate for comparing between models. I think that your background information provides evidence that $R^2$ is not a good tool for what you are trying to do. After all, you can increase $R^2$ by adding nonsense variables to your model. Did you take ...
9
I believe what you describe is already implemented in the caret package. Look at the rfe function or the vignette here: http://cran.r-project.org/web/packages/caret/vignettes/caretSelection.pdf Now, having said that, why do you need to reduce the number of features? From 70 to 20 isn't really an order of magnitude decrease. I would think you'd need more ...
8
Interesting discussion. To label stepwise regression a statistical sin is a bit of a religious statement - as long as one knows what they are doing and the objectives of the exercise are clear, it is definitely a fine approach with its own set of assumptions; it is certainly biased and does not guarantee optimality, etc. Yet, the same can be said of ...
8
I gather it is the stepwise selection that is slowing you down, so you would speed up your code by skipping the stepwise selection. As it happens, I would suggest you do not use stepwise selection for other reasons as well. If you want to test hypotheses, stepwise selection will invalidate the reported $p$-values. If you want to build a predictive model, it ...
8
Using stepwise selection to find a model is a very bad thing to do. Your hypothesis tests will be invalid, and your out of sample predictive accuracy will be very poor due to overfitting. To understand these points more fully, it may help you to read my answer here: Algorithms for automatic model selection. The stepAIC function is selecting a model ...
8
An analogy may help. Stepwise regression when the candidate variables are indicator (dummy) variables representing mutually exclusive categories (as in ANOVA) corresponds exactly to choosing which groups to combine by finding out which groups are minimally different by $t$-tests. If the original ANOVA was tested against $F_{p-1, n-p-1}$ but the final ...
8
Stepwise selection is not generally a good idea. To understand why, it may help you to read my answer here: Algorithms for automatic model selection. As far as advantages go, in the days when searching through all possible combinations of features was too computationally intensive for computers to handle, stepwise selection saved time and was tractable. ...
8
Yes, stepwise methods invalidate inference in this setting. Variables are retained because either (1) they are truly strong or (2) their effects are mis-estimated to be too far from zero. This creates a selection ("publication") bias. Even more clearly, variable selection results in a biased-low estimate of $\sigma^2$ which you can almost see from just ...
8
As @ChrisUmphlett suggests, you can do this by stepwise reduction of a logistic model fit. However, depending on what you're trying to use this for, I would strongly encourage you to read some of the criticisms of stepwise regression on CV first. There are certain very narrow contexts in which stepwise regression works adequately (e.g. simplifying an ...
|
2020-01-24 07:25:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5272380113601685, "perplexity": 838.3823463549961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250616186.38/warc/CC-MAIN-20200124070934-20200124095934-00438.warc.gz"}
|
https://projecteuclid.org/euclid.die/1372858566
|
## Differential and Integral Equations
### $G$-convergence for non-divergence second order elliptic operators in the plane
#### Abstract
The central theme of this paper is the study of $G$-convergence of elliptic operators in the plane. We consider the operator $$\mathcal{M}[u]=\text{Tr}(A(z) D^2u)=a_{11}(z)u_{xx}+2a_{12}(z)u_{xy}+a_{22}(z)u_{yy}$$ and its formal adjoint $$\mathcal{N}[v]=D^2(A(w)v)= (a_{11}(w)v)_{xx} + 2(a_{12}(w)v)_{xy}+ (a_{22}(w)v)_{yy},$$ where $u\in W^{2,p}$ and $v\in L^p$, with $p>1$, and $A$ is a symmetric uniformly bounded elliptic matrix such that $\text{det}A=1$ almost everywhere. We generalize a theorem due to Sirazhudinov--Zhikov, which is a counterpart of the Div-Curl lemma for elliptic operators in non-divergence form. As an application, under suitable assumptions, we characterize the $G$-limit of a sequence of elliptic operators. In the last section we consider elliptic matrices whose coefficients are also in $VMO$; this leads us to extend our result to any exponent $p\in (1,2)$.
#### Article information
Source
Differential Integral Equations, Volume 26, Number 9/10 (2013), 1127-1138.
Dates
First available in Project Euclid: 3 July 2013
Alberico, Teresa; Capozzoli, Costantino; D'Onofrio, Luigi. $G$-convergence for non-divergence second order elliptic operators in the plane. Differential Integral Equations 26 (2013), no. 9/10, 1127--1138. https://projecteuclid.org/euclid.die/1372858566
|
2018-06-23 00:59:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.754437267780304, "perplexity": 518.9377493757908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864848.47/warc/CC-MAIN-20180623000334-20180623020334-00452.warc.gz"}
|
https://www.neetprep.com/question/50229-figure-below-shows-north-south-poles-permanent-magnet-n-turn-coil-area-crosssection-resting-current-passed-coil-plane-coil-makes-angle-respect-tothe-direction-magnetic-field-B-plane-magnetic-field-thecoil-horizontal-vertical-respectively-torque-coil-will-bea-niABcosb-niABsinc--niAB-d-None-above-magnetic-field-radial/126-Physics--Magnetism-Matter/695-Magnetism-Matter
|
# NEET Physics Magnetism and Matter Questions Solved
The figure below shows the north and south poles of a permanent magnet in which an n-turn coil of area of cross-section A is resting, such that for a current i passed through the coil, the plane of the coil makes an angle $\theta$ with respect to the direction of the magnetic field B. If the plane of the magnetic field and the coil are horizontal and vertical respectively, the torque on the coil will be
(a)τ = $niAB\mathrm{cos}\theta$
(b)τ = $niAB\mathrm{sin}\theta$
(c)τ = niAB
(d) None of the above, since the magnetic field is radial
Concept Videos :-
#3 | Magnetic Dipole Moment & Field due to Bar Magnet
#4 | Superposition of Magnetic Field due to Bar Magnet
#5 | Field due to Short Bar Magnet
#6 | Torque Acting on Bar Magnet in Uniform Field
#7 | Work Done in Rotating Dipole in Field & PE
#8 | Oscillations of Bar Magnet in Uniform Field
#9 | Solenoid as Equivalent Bar Magnet
Concept Questions :-
Bar magnet
The plane of the coil is at angle $\theta$ with the magnetic field, so the magnetic moment of the coil, M = niA (directed normal to the plane), makes an angle $(90°-\theta)$ with B.
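Writing that step out as a worked equation (my restatement of the standard dipole-torque formula $\tau = MB\sin\alpha$, with $\alpha$ the angle between M and B):
$$\tau = MB\sin(90°-\theta) = (niA)B\cos\theta = niAB\cos\theta$$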
Difficulty Level:
• 33%
• 47%
• 14%
• 7%
|
2019-09-20 14:04:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8210996985435486, "perplexity": 1839.4443984188254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574039.24/warc/CC-MAIN-20190920134548-20190920160548-00217.warc.gz"}
|
http://gmatclub.com/forum/cat-mgmat-vs-actual-gmat-scores-goal-149733.html
|
# CAT MGMAT Vs ACTUAL GMAT scores GOAL 500.
Intern
Joined: 16 Oct 2012
Posts: 2
GMAT 1: 310 Q V
Followers: 0
Kudos [?]: 0 [0], given: 3
CAT MGMAT Vs ACTUAL GMAT scores GOAL 500. [#permalink]
24 Mar 2013, 11:09
Hi Folks!
I am coming to you today because D-Day is going to come pretty soon for me, and I am really looking for advice.
I have to score a small 500 in order to reach my dream. Some of you might laugh when reading this post; however, it is a challenging situation for me as I started pretty much from scratch (310 on my first CAT). Since this first mock test I have improved my scores, and my record history is 420 - 460 - 520 - 540 - 530. I took all the exams under real conditions and did the AWA as well as the IR. Moreover, I took all these CATs on MGMAT. According to your experience and expertise, do you believe that I should be able to break the 500 barrier, or should I try to take more CATs during my last preparation week?
Manager
Status: Helping People Ace the GMAT
Joined: 16 Jan 2013
Posts: 184
Location: United States
Concentration: Finance, Entrepreneurship
GMAT 1: 770 Q50 V46
GPA: 3.1
WE: Consulting (Consulting)
Followers: 10
Kudos [?]: 46 [1] , given: 4
Re: CAT MGMAT Vs ACTUAL GMAT scores GOAL 500. [#permalink]
25 Mar 2013, 07:44
It seems as though you are in good shape. Have you taken the 2 GMAT Prep exams? That should be your next step (with additional practice).
Intern
Joined: 16 Oct 2012
Posts: 2
GMAT 1: 310 Q V
Followers: 0
Kudos [?]: 0 [0], given: 3
Re: CAT MGMAT Vs ACTUAL GMAT scores GOAL 500. [#permalink]
25 Mar 2013, 09:04
Jim,
Thank you very much for your response! I hope your prognosis is going to be accurate on test day!
I took one of the two GMAT Prep tests as a diagnostic test; however, I still have one left, which I plan to do this week! Just as additional info, I have already taken the GMAT once, so I already have a good idea of how things go during the test, which I think is an advantage. The plan I have established for this week is to really focus on my error log, take the GMAT Prep test, and work on my ability to read and compute as fast as possible.
Thomas
Manager
Status: Helping People Ace the GMAT
Joined: 16 Jan 2013
Posts: 184
Location: United States
Concentration: Finance, Entrepreneurship
GMAT 1: 770 Q50 V46
GPA: 3.1
WE: Consulting (Consulting)
Followers: 10
Kudos [?]: 46 [1] , given: 4
Re: CAT MGMAT Vs ACTUAL GMAT scores GOAL 500. [#permalink]
25 Mar 2013, 10:58
No problem at all. Sounds like you have a plan. Good luck!
|
2016-06-27 10:12:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26238030195236206, "perplexity": 5685.203658805832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00195-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2474449/how-would-i-answer-the-following-question-about-the-determinant-of-a-matrix
|
How would I answer the following question about the determinant of a matrix?
If a $4 \times 4$ matrix $A$ with rows $v_1, v_2, v_3$ and $v_4$ has determinant $\det A = 9$, then $$\det \begin{pmatrix}v_1\\ 3v_2+4v_3\\ 9v_2+3v_3\\ v_4\end{pmatrix} = \ ?$$
What I did was multiply the initial determinant by $3$ and then $3$ again as that is what the rows were being multiplied by on their own. The addition of other rows does not seem to change the determinant according to the rules. However, my answer of $81$ was incorrect.
Any help would be highly appreciated!
Hint: If your original matrix is $A$, the new one is $\pmatrix{1 & 0 & 0 & 0\cr 0 & 3 & 4 & 0\cr 0 & 9 & 3 & 0\cr 0 & 0 & 0 & 1\cr} A$.
$\det \begin{bmatrix} v_{1} \\ 3v_{2}+4v_3 \\ 9v_2+3v_3 \\ v_4 \end{bmatrix} = \det \begin{bmatrix} v_{1} \\ 3v_{2}+4v_3 \\ -9v_3 \\ v_4 \end{bmatrix}$ (subtracting $3$ times row 2 from row 3) $= \det \begin{bmatrix} v_{1} \\ 3v_{2} \\ -9v_3 \\ v_4 \end{bmatrix}$ (adding $\frac{4}{9}$ of row 3 to row 2) $= 3 \cdot (-9) \cdot \det A = -27 \cdot 9 = -243$.
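A numeric spot-check of this row reduction (my own verification sketch):

import numpy as np
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
v1, v2, v3, v4 = A   # rows of A
B = np.vstack([v1, 3 * v2 + 4 * v3, 9 * v2 + 3 * v3, v4])
print(np.linalg.det(B) / np.linalg.det(A))   # ~ -27, so det B = -27 * 9 = -243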
|
2019-02-18 10:51:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7940722107887268, "perplexity": 100.25756168733331}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484928.52/warc/CC-MAIN-20190218094308-20190218120308-00472.warc.gz"}
|
https://huggingface.co/GroNLP/hateBERT
|
# GroNLP/hateBERT
## Model description
HateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model with more than 1 million posts from banned communities on Reddit. The model has been developed as a collaboration between the University of Groningen, the University of Turin, and the University of Passau.
For details, check out the paper presented at WOAH 2021. The code and the fine-tuned models are available on OSF.
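A minimal usage sketch, assuming the standard Hugging Face transformers fill-mask pipeline (this snippet is mine, not from the model card):

from transformers import pipeline
fill_mask = pipeline("fill-mask", model="GroNLP/hateBERT")
print(fill_mask("This community is [MASK]."))  # the model's mask token is [MASK]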
### BibTeX entry and citation info
@inproceedings{caselli-etal-2021-hatebert,
    title = "{H}ate{BERT}: Retraining {BERT} for Abusive Language Detection in {E}nglish",
    author = "Caselli, Tommaso and
      Basile, Valerio and
      Mitrovi{\'c}, Jelena and
      Granitzer, Michael",
    booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)",
    month = aug,
    year = "2021",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.woah-1.3",
    doi = "10.18653/v1/2021.woah-1.3",
    pages = "17--25",
    abstract = "We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful that we have curated and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the retrained version on three English datasets for offensive, abusive language and hate speech detection tasks. In all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by compatibility of the annotated phenomena.",
}
Mask token: [MASK]
|
2022-01-21 02:07:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18731439113616943, "perplexity": 6969.835621294343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302715.38/warc/CC-MAIN-20220121010736-20220121040736-00393.warc.gz"}
|
https://www.sparrho.com/item/fanout-line-structure-of-array-substrate-and-display-panel/d01755/
|
# Fanout line structure of array substrate and display panel
Imported: 12 Feb '17 | Published: 14 Jul '15
USPTO - Utility Patents
## Abstract
A fanout line structure of an array substrate includes a plurality of fanout lines arranged on a fanout area of the array substrate, where the resistance value of a fanout line is dependent on the length of the fanout line. Each of the fanout lines comprises a first conducting film. Resistance values of a first part of the fanout lines are less than resistance values of a second part of the fanout lines, and the first part of the fanout lines are covered by an additional conducting film. In the fanout lines covered by the additional conducting film, as the resistance value of the fanout line increases, the area of the additional conducting film covering the fanout line correspondingly decreases. An additional capacitor is generated between the additional conducting film and the first conducting film.
## Description
### TECHNICAL FIELD
The present disclosure relates to the field of a display device, and more particularly to a fanout line structure of an array substrate and a display panel.
### BACKGROUND
A display panel generally includes a liquid crystal (LC) panel and an organic light emitting diode (OLED) panel. A driving circuit cooperating with a backlight unit drives the LC panel to display images.
As shown in FIG. 1, a thin film transistor (TFT) array area 120 is arranged on an array substrate 100 of the LC panel, where signal lines and TFTs are arranged in the TFT array area 120. Bonding pad of a driving circuit board 130 is connected with the signal line of the array substrate through fanout lines 111, and the fanout lines are arranged on a fanout area.
The bonding pad is closely arranged on the driving circuit board 130, but the signal lines are dispersedly arranged in the TFT array area 120; namely, distances between the bonding pad and different signal lines are different, which causes the fanout lines, connected between the bonding pad and the signal lines, to have different resistance values. A waveform of a signal changes because of different lengths and resistance values of the fanout lines, thereby affecting display quality of the LCD device. At present, a coiling is arranged in the fanout line to allow fanout lines of different lengths and resistance values to obtain even resistance values. As shown in FIG. 2, a bending section 112 is formed in the fanout line through the coiling, which increases the length of the fanout line, and then increases the resistance value of the fanout line, thereby synchronizing the signals transferred by the fanout lines. The bending section 112 increases a height H of the fanout area: as a straight-line distance between two endpoints of the fanout line shortens, the length of the coiling arranged in the fanout line correspondingly becomes long. However, the gap space between the fanout lines is limited; thus, as the straight-line distance between two endpoints of the fanout line shortens, more bending sections are arranged to increase the length of the coiling of the fanout line (two bending sections are arranged in each of the fanout lines in FIG. 2), which increases the height H of the fanout area, thereby affecting the frame width of the LCD device, and further affecting the design of a narrow frame for the LCD device. Additionally, signal waveform distortion is caused not only by the resistance value of the fanout line; parasitic capacitance is also an important factor. FIG. 3 is a cross-sectional diagram of the fanout area of the LC panel, where a parasitic capacitor CLC is generated between a first conducting film 106 of the fanout line 111 of the array substrate 100 and an indium tin oxide (ITO) conducting film 201 of a color filter substrate 200, and between a second conducting film 104 of the fanout line 111 of the array substrate 100 and the ITO conducting film 201 of the color filter substrate 200, which causes the signal to delay. However, because the lengths of the fanout lines are different, the overlapping area between the first conducting film and the ITO conducting film and the overlapping area between the second conducting film and the ITO conducting film, which correspond to different lengths of fanout lines, are correspondingly different. Thus, the parasitic capacitors are different, which causes different influences on the signal.
### SUMMARY
In view of the above-described problems, the aim of the present disclosure is to provide a display panel having a small height in the fanout area and a fanout line structure of an array substrate having a small height, capable of obtaining good display quality and a narrow frame for a display device.
The purpose of the present disclosure is achieved by the following methods:
A fanout line structure of an array substrate comprises a plurality of fanout lines arranged on a fanout area of the array substrate, where resistance value of the fanout line is dependent on length of the fanout line. Each of the fanout lines comprises a first conducting film. Resistance values of a first part of fanout lines are less than resistance values of a second part of the fanout lines, and the first part of the fanout lines are covered by an additional conducting film. In the fanout lines covered by the additional conducting film, as the resistance value of the fanout line increases, area of the additional conducting film covering the fanout line correspondingly decreases. An additional capacitor is generated between the additional conducting film and the first conducting film.
Furthermore, a width of the additional conducting film of each of the fanout lines is the same as a width of an overlapping area between the first conducting film and the additional conducting film of each of the fanout lines, and lengths of the additional conducting films that cover the fanout lines having different resistance values are different. Because the width of the additional conducting film of each of the fanout lines is the same as the width of the overlapping area between the first conducting film and the additional conducting film of each of the fanout lines, a length of the additional conducting film of each of the fanout lines is obtained according to a length of the fanout line, and a corresponding area of the additional conducting film covering the fanout line is further obtained, thereby obtaining a corresponding additional capacitor.
Furthermore, the length of the additional conducting film that covers the fanout line is $L_{22}$: $$L_{22}=\frac{\epsilon_{r1}\, d_2\, (L_1^2-L_2^2)}{L_2\,(d_1 \epsilon_{r2}-d_2 \epsilon_{r1})},$$ where $L_1$ is a length of one of the plurality of fanout lines, and a fanout line having a length $L_1$ is regarded as a reference line; $L_2$ is a length of the fanout line covered by the additional conducting film; $L_{22}$ is the length of the additional conducting film covering the fanout line having a length $L_2$; $\epsilon_{r1}$ is a relative dielectric constant of a liquid crystal layer of a liquid crystal panel; $d_1$ is a thickness of the liquid crystal layer; $\epsilon_{r2}$ is a relative dielectric constant of a dielectric medium between the additional conducting film and the first conducting film; and $d_2$ is a thickness of an insulating layer between the additional conducting film and the first conducting film.
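A numeric sketch of this overlay-length formula; all parameter values below are illustrative assumptions (except the LC thickness range, which the description cites later), not figures from the patent:

eps_r1 = 3.0    # relative dielectric constant of the LC layer (assumed)
eps_r2 = 6.5    # relative dielectric constant of the passivation layer (assumed)
d1 = 3.5e-6     # LC layer thickness, m (description cites 3-4 um)
d2 = 0.3e-6     # passivation layer thickness, m (assumed)
L1 = 10e-3      # reference (longest) fanout line, m (assumed)
L2 = 7e-3       # shorter fanout line to be compensated, m (assumed)

L22 = eps_r1 * d2 * (L1**2 - L2**2) / (L2 * (d1 * eps_r2 - d2 * eps_r1))
print(f"overlay length L22 = {L22 * 1e3:.2f} mm")   # ~0.30 mm for these inputs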
Furthermore, the reference line is a longest fanout line of all of the fanout lines. The longest fanout line does not need be covered by the additional conducting film because the resistance value of the longest fanout line itself is greatest in all of the fanout lines, thus, area of the additional conducting film of the fanout line is calculated according to the reference line (namely the longest fanout line).
Furthermore, a dielectric medium between the additional conducting film and the first conducting film is a passivation layer, where the passivation layer has good insulating effect.
Furthermore, straight-line distances between two endpoints of some of the fanout lines are different; among the fanout lines having a short straight-line distance, at least one fanout line is configured with a bending section. A waveform of the signal is affected by the resistance value R and the parasitic capacitor C of the fanout line, and the formula for the time constant τ of signal delay of the fanout line is: τ=RC. Namely, if the time constant τ of signal delay of each of the fanout lines needs to be the same, the resistance value R and the parasitic capacitor C of the fanout line can be adjusted simultaneously, which may meet requirements of process, design, and production.
Furthermore, the first conducting film is a metal conducting film, where the metal conducting film has good conducting effect, which reduces signal delay.
Furthermore, the additional conducting film is an indium tin oxide conducting film, which is directly added in a process of manufacturing the array substrate without any other process.
Furthermore, the fanout line further comprises a second conducting film arranged under the first conducting film. Stability of the fanout line having two conducting films is good.
A display panel comprises any one of the above-mentioned fanout lines.
In the present disclosure, resistance values of the first part of fanout lines of the array substrate are less than resistance values of the second part of the fanout lines of the array substrate, and the first part of the fanout lines are covered by an additional conducting film. In the fanout lines covered by the additional conducting film, as the resistance value of the fanout line increases, area of the additional conducting film covering the fanout line correspondingly decreases. An additional capacitor is generated between the additional conducting film and the first conducting film. A resistor-capacitor (RC) delay is caused by the parasitic capacitor to a signal transferred by the fanout line. Thus, the fanout line having a relatively small resistance value may delay transferring the signal through the additional capacitor, which allows the signal transferred by the fanout line having the small resistance value to synchronize with the signal transferred by the fanout line having a great resistance value, where the fanout line having the great resistance value itself has longer delay time than the fanout line having the small resistance value.
### DETAILED DESCRIPTION
The present disclosure will further be described in detail in accordance with the figures and the exemplary examples.
### EXAMPLE 1
As shown in FIG. 4 and FIG. 5, and in reference to FIG. 1, a first example provides a liquid crystal (LC) panel comprising an array substrate 100 and a color filter substrate 200, where the color filter substrate 200 comprises a first glass substrate 203, a black matrix 202, and an indium tin oxide (ITO) conducting film 201. An area comprising a plurality of fanout lines of the array substrate 100 is regarded as a fanout area 110 (as shown in FIG. 1), and the plurality of fanout lines 111 are arranged on a second glass substrate 105, where lengths of the plurality of fanout lines are different, and resistance values of the plurality of fanout lines are correspondingly different. Each of the fanout lines 111 at least comprises a first conducting film 106. In all of the fanout lines 111, resistance values of a first part of fanout lines are less than resistance values of a second part of fanout lines, the first part of fanout lines having a relatively small resistance value are labeled as 111x, the fanout lines 111x are covered by an additional conducting film 101 (as shown in FIG. 5). In the fanout lines 111x, a first insulating layer 102 (passivation layer, PAV) is a dielectric medium, and is arranged between the additional conducting film 101 and the first conducting film 106. As the resistance value of the fanout line 111x increases, area of the additional conducting film covering the fanout line 111x correspondingly decreases (as shown in FIG. 4). The additional conducting film 101 is able to be connected with a common electrode, a ground terminal, and other electrode. An additional capacitor Cx is generated between the additional conducting film 101 and the first conducting film 106, and the additional capacitor Cx is used to reduce impedance difference between the fanout lines.
The additional capacitor Cx is a parasitic capacitor. A resistor-capacitor (RC) delay is caused by the parasitic capacitor to a signal transferred by the fanout line 111. Thus, the fanout line having a relatively small resistance value may delay transferring the signal through the additional capacitor Cx, which allows the signal transferred by the fanout line having the small resistance value to synchronize with the signal transferred by the fanout line having a great resistance value, where the fanout line having the great resistance value itself has a longer delay time than the fanout line having the small resistance value. A formula for calculating a time constant τ of signal delay of the fanout line is: τ=RC, where R is the resistance value of the fanout line, and C is the capacitance value of the fanout line. Namely, the signal delay time of the fanout line depends on the resistance value of the fanout line and the capacitance value of the fanout line.
An optimized structure of the first example is as follows: the first insulating layer 102 employs the passivation layer (PAV) having good insulating effect; the first conducting film 106 is a metal conducting film having good conducting effect and small signal delay. The additional conducting film 101 is the indium tin oxide (ITO) film, which is directly added in a process of manufacturing the array substrate.
The fanout lines 111 further comprise a second conducting film 104 arranged under the first conducting film 106, and the second conducting film 104 is the metal conducting film. A second insulating layer 103 (gate insulating layer, GI) is arranged between the first conducting film 106 and the second conducting film 104. The fanout line has two conducting films, which improves stability of the fanout line. It should be understood that the fanout line may be configured with three or more conducting films. The fanout line successively comprises the second conducting film 104 arranged on a bottom layer of the array substrate, the second insulating layer 103 arranged on the second conducting film 104, the first conducting film 106 arranged on the second insulating layer 103, and the first insulating layer 102 arranged on the first conducting film 106. The first insulating layer 102 of some of the fanout lines 111 is covered by the additional conducting film 101. As the resistance value of the fanout line increases, the area of the additional conducting film covering the fanout line 111x correspondingly decreases, where the length of the fanout line and the resistance value of the fanout line are directly proportional.
The length of each fanout line is different, so the resistance value of each fanout line is correspondingly different. In order to synchronize the signals, the additional capacitor Cx of each fanout line must also be different. The additional capacitor Cx of each fanout line is determined by the overlapping area between the additional conducting film 101 and the first conducting film 106. As shown in FIG. 3 and FIG. 5, a parasitic capacitor CLC is generated between a fanout line without the additional conducting film 101 and the ITO conducting film 201 of the color filter substrate 200; the parasitic capacitor CLC is formed by the first conducting film 106 of the fanout line 111, the second conducting film 104 of the fanout line 111, and the ITO conducting film 201 of the color filter substrate 200. The capacitance value of the parasitic capacitor CLC is far less than that of the additional capacitor Cx because of the great thickness of the liquid crystal layer; thus the parasitic capacitor CLC causes only a small RC delay. However, in order to improve the accuracy of the delay calculation, the parasitic capacitor should still be considered. The formula for the parasitic capacitor between the fanout line 111 and the ITO conducting film 201 of the color filter substrate 200 is:
$C = \dfrac{\varepsilon_0 \cdot \varepsilon_r \cdot S}{d} = \dfrac{\varepsilon_0 \cdot \varepsilon_r \cdot L \cdot W}{d}$
where ε0 is the absolute dielectric constant (vacuum permittivity), εr is the relative dielectric constant of the liquid crystal layer, L is the length of the fanout line, W is the width of the fanout line, and d is the thickness of the liquid crystal layer, generally in the range of 3-4 μm.
The present disclosure will now be described in further detail by calculating the area of the additional conducting film that covers the fanout line.
In order to simplify the calculations, the width of the additional conducting film of each fanout line 111 is the same as the width of the overlapping area (the effective area generating the capacitor) between the first conducting film 106 and the additional conducting film. The lengths of the additional conducting films covering fanout lines of different resistance values are different: if a fanout line is short, it needs to be covered by a long additional conducting film. In the first example, the width of the additional conducting film is the same as the width of the fanout line, so the width of the additional conducting film equals the width of the overlapping area between the first conducting film 106 and the additional conducting film, which simplifies the manufacturing process and operation.
In order to determine the overlay length for the additional capacitor of a fanout line, one fanout line is chosen from all of the fanout lines as a reference line. As shown in FIG. 4, the first example chooses the longest fanout line 111a as the reference line. The additional capacitor of a fanout line 111b covered by the additional conducting film is calculated relative to the longest fanout line 111a; namely, the length of the additional conducting film that covers the fanout line 111b is calculated. If the resistance value of the longest fanout line 111a is R1 and the resistance value of the covered fanout line 111b is R2, the formulas for R1 and R2 are:
$R_1 = R_s \cdot \dfrac{L_1}{W}, \quad R_2 = R_s \cdot \dfrac{L_2}{W}$

where Rs is the sheet resistance of the conducting film, L1 and L2 are the lengths of the two fanout lines, and W is the line width.
For the longest fanout line 111a, which has no additional conducting film, as shown in FIG. 3 and FIG. 5, the parasitic capacitor C1 (namely CLC) is generated between the first conducting film 106 of the fanout line 111 and the ITO conducting film 201 of the color filter substrate 200, and between the second conducting film 104 of the fanout line 111 and the ITO conducting film 201, where the capacitance value of the parasitic capacitor C1 is:
$C_1 = \dfrac{\varepsilon_0 \cdot \varepsilon_{r1} \cdot L_1 \cdot W}{d_1}$
where ε0 is the absolute dielectric constant, εr1 is the relative dielectric constant of the liquid crystal layer of the LC panel, L1 is the length of the fanout line 111a, W is the width of the fanout line 111a, and d1 is the thickness of the liquid crystal layer. L1 is calculated by a fanout tool (a special tool used by the designer). For the longest fanout line 111a without any additional conducting film, the formula for the time constant τ1 of the fanout line 111a is:
$\tau_1 = R_1 \cdot C_1 = \dfrac{R_s \cdot \varepsilon_0 \cdot \varepsilon_{r1} \cdot L_1^2}{d_1}$
The time constant τ1 is directly proportional to the square of L1, which yields the following result: the fanout lines at the two sides of the entire fanout area are the longest, and the time constant τ1 of the longest fanout line is the greatest. This time constant τ1 is regarded as the reference in the following formulas.
For the fanout line 111b, the length of the portion of the fanout line 111b not covered by the ITO conducting film 101 is denoted L21, and the length of the portion covered by the ITO conducting film 101 is denoted L22. The relationship between L21 and L22 is:
L2=L21+L22
• The capacitance value of the parasitic capacitor C21 of the portion of the fanout line 111b without the ITO conducting film 101 is:
$C_{21} = \dfrac{\varepsilon_0 \cdot \varepsilon_{r1} \cdot L_{21} \cdot W}{d_1}$
• The capacitance value of the additional capacitor C22 of the portion of the fanout line 111b covered by the ITO conducting film 101 is:
$C_{22} = \dfrac{\varepsilon_0 \cdot \varepsilon_{r2} \cdot L_{22} \cdot W}{d_2}$
In the formula, εr2 is the relative dielectric constant of the first insulating layer 102 (namely the passivation layer), and d2 is the thickness of the first insulating layer 102. The relative dielectric constant of the passivation layer is close to that of the liquid crystal layer; however, the thickness of the passivation layer is small, so for the same area, the capacitance value of the additional capacitor Cx is far greater than that of the parasitic capacitor CLC generated between the fanout line and the ITO conducting film of the color filter substrate. As shown in FIG. 5, in the first example the capacitance value of the additional capacitor Cx is about ten times that of the parasitic capacitor CLC. The capacitor C21 is connected with the capacitor C22 in parallel, so the capacitance value of the capacitor C2 of the entire fanout line 111b is:
C2=C21+C22
When adjusting the impedance of the fanout line, the time constant of the fanout line 111a is regarded as the reference, so the covered line must satisfy:

τ2 = R2·C2 = R2·(C21 + C22) = τ1

Thus, solving the above equations together with L2 = L21 + L22 yields:
$L_{21} = \dfrac{\varepsilon_{r1} d_2 L_1^2 - \varepsilon_{r2} d_1 L_2^2}{L_2 \left(d_2 \varepsilon_{r1} - d_1 \varepsilon_{r2}\right)}, \quad L_{22} = \dfrac{\varepsilon_{r1} d_2 \left(L_1^2 - L_2^2\right)}{L_2 \left(d_1 \varepsilon_{r2} - d_2 \varepsilon_{r1}\right)}$
where L22 is the length of the additional conducting film 101 that covers the fanout line 111b. Thus, the area S of the ITO conducting film that covers the fanout line 111b is S = W·L22.
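As a sanity check on the derivation, the following Python sketch (my addition, using assumed material parameters rather than measured panel values) plugs the L22 formula back in and confirms that the covered line's delay R2·(C21 + C22) equals the reference delay τ1:

```python
# Verify tau2 == tau1 for the derived covered length L22.
eps0 = 8.854e-12             # vacuum permittivity, F/m
eps_r1, d1 = 7.0, 3.5e-6     # liquid crystal layer (assumed values)
eps_r2, d2 = 6.5, 0.3e-6     # passivation layer (assumed values)
Rs, W = 0.5, 10e-6           # sheet resistance and line width (assumed)
L1, L2 = 2e-2, 1.5e-2        # reference and covered line lengths, m

L22 = eps_r1 * d2 * (L1**2 - L2**2) / (L2 * (d1 * eps_r2 - d2 * eps_r1))
L21 = L2 - L22

tau1 = (Rs * L1 / W) * (eps0 * eps_r1 * L1 * W / d1)
C21 = eps0 * eps_r1 * L21 * W / d1
C22 = eps0 * eps_r2 * L22 * W / d2
tau2 = (Rs * L2 / W) * (C21 + C22)
assert abs(tau1 - tau2) / tau1 < 1e-9   # the two delays agree
```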
An optimized structure of the first example is shown in FIG. 4. The straight-line distances between the two endpoints of some of the fanout lines are different. Taking the longest fanout line 111a and the fanout line 111b as an example, the straight-line distance between the two endpoints of the fanout line 111b is shorter than that of the longest fanout line 111a. Thus, in the first example, the fanout line 111b is configured with a bending section 112 to increase its length. The waveform of the signal is affected by the resistance value R and the parasitic capacitance C of the fanout line, and the formula for the time constant τ of the signal delay is τ = RC. That is, if the time constant τ of the signal delay of each fanout line needs to be the same, the resistance value R and the capacitance C of the fanout line can be adjusted simultaneously, which can meet the requirements of process, design, and production. In the first example, the additional capacitor is generated by covering the fanout line with the additional conducting film on the basis of arranging coiling in the fanout line, which reduces the length of the coiling. If the resistance value and time constant of a fanout line are far less than those of the longest fanout line, the fanout line may both be provided with coiling and covered with the additional conducting film; this is suitable for large-size LCD televisions, avoiding a great height H of the fanout area that would otherwise result from more coiling, while still obtaining signal synchronization.
### EXAMPLE 2
As shown in FIG. 6 and FIG. 7, the difference between the first example and a second example is as follows: in the second example, the width of the additional conducting film is less than the width of the fanout line. In this condition, different widths of the additional conducting film can be chosen according to the size of the panel and the requirements of the process.
### EXAMPLE 3
FIG. 8 is a schematic diagram of a third example. The width of the additional conducting film 101 is greater than the width of the fanout line 111, which makes the width of the overlapping area between the additional conducting film 101 and the first conducting film the same as the width of the first conducting film, thereby improving the accuracy of calculating the additional capacitor.
### EXAMPLE 4
As shown in FIG. 9, a fourth example differs from the above-mentioned examples in that a single block of additional conducting film 101 covers a plurality of fanout lines to obtain the additional capacitor, which simplifies the manufacturing process without any complicated covering film, thereby reducing cost.
### EXAMPLE 5
As shown in FIG. 10, the difference between the first example and a fifth example is as follows: besides the additional conducting film, only one conducting film is arranged in the fanout line. Namely, the fanout line 111 is configured with only the first conducting film 106, where the first conducting film 106 is a metal conducting film. The insulating medium of the additional capacitor Cx is the first insulating film 102, namely the passivation layer (PAV), which simplifies the manufacturing process. However, the stability of the fifth example is lower than that of the first example.
### EXAMPLE 6
As shown in FIG. 11, in a sixth example, one conducting film is arranged in the fanout line. The difference between the fifth example and the sixth example is as follows: the insulating medium of the additional capacitor Cx consists of the first insulating film 102 and the second insulating film 103, namely the passivation layer and the gate insulating film.
### EXAMPLE 7
As shown in FIG. 12, in a seventh example, one conducting film is arranged in the fanout line. The difference between the seventh example and the above-mentioned examples is as follows: the additional conducting film 101 is a metal conducting film, and the first insulating film 102 is arranged on the additional conducting film 101, which protects the additional conducting film 101. The insulating medium between the first conducting film 106 and the additional conducting film 101 is the second insulating film 103 (namely the gate insulating film).
The present disclosure has been described in detail above with specific exemplary examples; however, the present disclosure is not limited to these specific examples. Persons of ordinary skill in the technical field of the present disclosure can, without departing from the conception of the present disclosure, make simple deductions or replacements, all of which should be considered to fall within the protection scope of the present disclosure.
## Claims
1. A fanout line structure of an array substrate, comprising:
a plurality of fanout lines arranged on a fanout area of the array substrate;
wherein a resistance value of each of the fanout lines is dependent on a length of the fanout line;
each of the fanout lines comprises a first conducting film; resistance values of a first part of the fanout lines are less than resistance values of a second part of the fanout lines, and the first part of the fanout lines are covered by an additional conducting film; in the fanout lines covered by the additional conducting film, as the resistance value of the fanout line increases, the area of the additional conducting film covering the fanout line correspondingly decreases; an additional capacitor is generated between the additional conducting film and the first conducting film, wherein a width of the additional conducting film of each of the fanout lines is the same as a width of an overlapping area between the first conducting film and the additional conducting film of each of the fanout lines, and lengths of the additional conducting films that cover the fanout lines having different resistance values are different, wherein the length of the additional conducting film that covers the fanout line is L22:
L22 = εr1·d2·(L1² − L2²)/[L2·(d1·εr2 − d2·εr1)];
wherein L1 is a length of one of the plurality of fanout lines, and a fanout line having a length L1 is regarded as a reference line; L2 is a length of the fanout line covered by the additional conducting film, L22 is the length of the additional conducting film covering the fanout line having the length L2, εr1 is a relative dielectric constant of a liquid crystal layer of a liquid crystal panel, d1 is a thickness of the liquid crystal layer, εr2 is a relative dielectric constant of a dielectric medium between the additional conducting film and the first conducting film, and d2 is a thickness of the dielectric medium between the additional conducting film and the first conducting film.
2. The fanout line structure of the array substrate of claim 1, wherein the reference line is a longest fanout line of all of the fanout lines.
3. The fanout line structure of the array substrate of claim 1, wherein a dielectric medium between the additional conducting film and the first conducting film is a passivation layer.
4. The fanout line structure of the array substrate of claim 1, wherein straight-line distances between two endpoints of some of the fanout lines are different; in the fanout lines having a short straight-line distance, at least one fanout line is configured with a bending section.
5. The fanout line structure of the array substrate of claim 1, wherein the first conducting film is a metal conducting film.
6. The fanout line structure of the array substrate of claim 1, wherein the additional conducting film is an indium tin oxide conducting film or a metal conducting film.
7. The fanout line structure of the array substrate of claim 1, further comprising a second conducting film arranged under the first conducting film.
8. The fanout line structure of the array substrate of claim 1, wherein a block of additional conducting film covers the plurality of fanout lines.
9. A display panel, comprising:
a plurality of fanout lines arranged on a fanout area of the array substrate;
wherein a resistance value of each of the fanout lines is dependent on a length of the fanout line;
each of the fanout lines comprises a first conducting film; resistance values of a first part of the fanout lines are less than resistance values of a second part of the fanout lines, and the first part of the fanout lines are covered by an additional conducting film; in the fanout lines covered by the additional conducting film, as the resistance value of the fanout line increases, the area of the additional conducting film covering the fanout line correspondingly decreases; an additional capacitor is generated between the additional conducting film and the first conducting film, wherein a width of the additional conducting film of each of the fanout lines is the same as a width of an overlapping area between the first conducting film and the additional conducting film of each of the fanout lines, and lengths of the additional conducting films that cover the fanout lines having different resistance values are different, wherein the length of the additional conducting film that covers the fanout line is L22:
L22 = εr1·d2·(L1² − L2²)/[L2·(d1·εr2 − d2·εr1)];
wherein L1 is a length of one of the plurality of fanout lines, and a fanout line having a length L1 is regarded as a reference line; L2 is a length of the fanout line covered by the additional conducting film, L22 is the length of the additional conducting film covering the fanout line having the length L2, εr1 is a relative dielectric constant of a liquid crystal layer of a liquid crystal panel, d1 is a thickness of the liquid crystal layer, εr2 is a relative dielectric constant of a dielectric medium between the additional conducting film and the first conducting film, and d2 is a thickness of the dielectric medium between the additional conducting film and the first conducting film.
10. The display panel of claim 9, wherein the reference line is a longest fanout line of all of the fanout lines.
11. The display panel of claim 9, wherein a dielectric medium between the additional conducting film and the first conducting film is a passivation layer.
12. The display panel of claim 9, wherein straight-line distances between two endpoints of some of the fanout lines are different; in the fanout lines having a short straight-line distance, at least one fanout line is configured with a bending section.
13. The display panel of claim 9, wherein the first conducting film is a metal conducting film.
14. The display panel of claim 9, wherein the additional conducting film is an indium tin oxide conducting film or a metal conducting film.
15. The display panel of claim 9, wherein a block of additional conducting film covers the plurality of the fanout lines.
|
2020-11-27 02:46:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25069066882133484, "perplexity": 2005.9139946955756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189038.24/warc/CC-MAIN-20201127015426-20201127045426-00352.warc.gz"}
|
https://www.mathdoubts.com/reciprocal-identities/
|
# Reciprocal identities
The reciprocal relation of a trigonometric function with another trigonometric function is called a reciprocal identity.
## Introduction
Every trigonometric function has a reciprocal relation with another trigonometric function. So, the six trigonometric ratios form six reciprocal trigonometric identities, which are used as formulas in trigonometry.
### Sine and Cosecant functions
The sine function is the reciprocal of the cosecant function, and the cosecant function is likewise the reciprocal of the sine function.
$(1)\,\,\,\,\,\,$ $\sin{\theta} = \dfrac{1}{\csc{\theta}}$
$(2)\,\,\,\,\,\,$ $\csc{\theta} = \dfrac{1}{\sin{\theta}}$
### Cosine and Secant functions
The cosine function is the reciprocal of the secant function, and the secant function is likewise the reciprocal of the cosine function.
$(3)\,\,\,\,\,\,$ $\cos{\theta} = \dfrac{1}{\sec{\theta}}$
$(4)\,\,\,\,\,\,$ $\sec{\theta} = \dfrac{1}{\cos{\theta}}$
### Tangent and Cotangent functions
The tangent function is the reciprocal of the cotangent function, and the cotangent function is likewise the reciprocal of the tangent function.
$(5)\,\,\,\,\,\,$ $\tan{\theta} = \dfrac{1}{\cot{\theta}}$
$(6)\,\,\,\,\,\,$ $\cot{\theta} = \dfrac{1}{\tan{\theta}}$
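These identities are easy to spot-check numerically; the snippet below (an illustrative addition) verifies them at an arbitrary angle using Python's math module:

```python
import math

theta = 0.7  # radians; any angle where all six functions are defined
sin_t, cos_t, tan_t = math.sin(theta), math.cos(theta), math.tan(theta)
csc_t, sec_t, cot_t = 1 / sin_t, 1 / cos_t, 1 / tan_t

# Each function equals the reciprocal of its partner.
assert math.isclose(sin_t, 1 / csc_t)
assert math.isclose(cos_t, 1 / sec_t)
assert math.isclose(tan_t, 1 / cot_t)
```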
|
2020-09-19 01:57:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.971839964389801, "perplexity": 1955.6257094090554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189928.2/warc/CC-MAIN-20200919013135-20200919043135-00534.warc.gz"}
|
http://mathoverflow.net/questions/1176/given-a-sequence-defined-on-the-positive-integers-how-should-it-be-extended-to/1180
|
# Given a sequence defined on the positive integers, how should it be extended to be defined at zero?
This question is inspired by a lecture Bjorn Poonen gave at MIT last year. I have ideas of my own, but I'm interested in what other people have to say, so I'll make this community wiki and post my own thoughts later. Here are some examples of what I'm talking about:
• Why does a^0 = 1?
• Why does 0! = 1?
• If the Fibonacci number Fn+1 counts the number of ways to tile a board of length n with tiles of length 1 and 2, why does F1 = 1?
• What is the determinant of a 0x0 matrix?
• What is the degree of the zero polynomial?
• What is the direct product of zero groups?
• What is the zeroth homotopy group of a space?
I want to be very precise about exactly what I'm asking for here.
Question 1: What general principles do you apply in a situation like this? Can they be stated as theorems, or do they only exist at the level of intuition?
Question 2: Do you know of any examples where there are two different ways to extend a sequence to zero, both of which are reasonable from the perspective of some principle?
Feel free to answer at any level of sophistication.
-
My own thought tend to revolve around some subset of the following:
--Find a combinatorial definition for the sequence, and see if it makes sense when you extend slightly further.
--If you are trying to perform a vacuous task (e.g. tiling an empty board, or counting functions defined on the empty set), you can do it in exactly one way. Most of your examples fall under this class, including a^0 (functions defined on the empty set), 0! (bijections on the empty set), F_1 (tiling an empty board), and the cardinality of the direct product of no groups (choosing one object from each class, so the direct product should be the identity).
--An empty sum is equal to 0, an empty product is equal to 1. (again the cardinality of the direct product of 0 groups should be 1).
What about the determinant of a 0x0 matrix? Well, it's a sum over all permutations from a 0 element to itself of an empty product. There's one element in the sum (vacuous task), and its an empty product, so should be 1.
I don't really know if there's a rigorous statement of this, or if there's not some way it can come into self-contradiction if there's two combinatorial ways of defining a sequence, but it's what seems natural to go by.
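For what it's worth, these conventions can be checked directly in code (my addition; `math.prod` needs Python 3.8+, and the 0×0 determinant value assumes NumPy's usual behavior):

```python
import math
import numpy as np

assert sum([]) == 0                            # empty sum
assert math.prod([]) == 1                      # empty product
assert math.factorial(0) == 1                  # 0! = 1
assert np.linalg.det(np.zeros((0, 0))) == 1.0  # det of a 0x0 matrix
```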
-
Poonen claimed, and I agree, that the determinant of a 0x0 matrix should be equal to 1. Consider what happens when you try to expand the determinant of a 1x1 matrix by minors. – Qiaochu Yuan Oct 19 '09 at 7:38
Yes, I was just rethinking that one as well...see the re-explanation above. – Kevin P. Costello Oct 19 '09 at 7:41
Given your examples, you don't seem to be asking for a canonical way to extend arbitrary functions defined on positive integers to zero. Instead, you're taking functions whose inputs are sets and asking if they can be defined when some input is the empty set. As long as your sequence defined on positive integers comes equipped with this extra structure, you shouldn't have too much trouble extending it naturally. If you start with an unstructured sequence, the reasons for favoring one extension over another become rather weak (e.g., Kolmogorov complexity).
Here's the standard example of a sequence that extends to zero in different ways: the sequence that is identically zero on the positive integers. One extension is the zero function. Other extensions interpret the sequence as $n \mapsto k \cdot 0^n$ for some nonzero k.
Incidentally, you need to choose a base point on your space to define pi0. Once you have that, it is the set of homotopy classes of pointed maps from S0 to your space. Equivalently, it is the (pointed) set of path components. It does not have a natural group structure (although it may if your space comes with some kind of composition law).
-
The determinant of an endomorphism f of a free R-module of dimension n (R commutative) is the $d \in R$ such that $\bigwedge^n f$ is the homothety of ratio d. Our case corresponds to $n=0$, and $\bigwedge^0 f$ is the identity of R, so d=1.
The reasons, already given, why 0^0=1 (m^n is the number of functions from a set of cardinality n to a set of cardinality m) and 0!=1 (n! is the number of bijections of a set of cardinality n), are illustrations of Baez's ideas on counting as decategorification.
-
For the first three, you can define a recurrence. Run the recurrences backward.
Also, 0! = Γ(1) = $\int_0^\infty e^{-t}\,dt$ = 1; here there's nothing special about 0. (But Γ isn't defined for nonpositive integers.)
-
By considering a^0 and 0^b, it seems reasonable to me to define 0^0 to be 0 or 1 depending on what you're up to. Of course you could argue that you just shouldn't define 0^0 for this reason.
This might be considered cheating as an answer to question 2 though because I'm really extending a map for N^2 to (0,0) in two different ways.
-
I would argue as follows. If you're a combinatorialist who accepts that 0! = 1, you accept that there is one bijection from the empty set to itself, so you accept that there is one function from the empty set to itself, so you should accept that 0^0 = 1. – Qiaochu Yuan Oct 19 '09 at 7:12
When is it actually useful, in practice, to set 0^0 not equal to 1? – Alex Fink Oct 19 '09 at 21:37
This may sound lame, but I'd say you just look at the properties of the sequence you care about, and if you can define it so those properties still hold (exponent rules, recursion, universal properties...), then you do. At least I can't imagine there being a more general answer than this.
Regarding 0^0, I'd say 0^0=1 works better "algebraically", since then you can still write 0^0=0^(-0)=1/(0^0), and 0^0=0^(0+0)=(0^0)*(0^0).
-
This may also sound lame, but how do you know you're looking at the right properties? – Qiaochu Yuan Oct 19 '09 at 7:18
I have a utilitarian view on definitions: they're meant to shorten arguments. So whichever properties allow you to shorten your arguments are the "right" ones. This obviously depends on the kind of math you're doing, and how you've been doing it, but I don't think this dependency is meant to be avoided. – Andrew Critch Oct 19 '09 at 14:48
Fair enough. I do like that you mentioned universal properties, since my own response to this question is basically "categorify until it becomes obvious what to do." For example, the product of zero things in a category is a terminal object and the coproduct of zero things is an initial object. – Qiaochu Yuan Oct 19 '09 at 16:48
|
2014-04-19 05:14:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7638905048370361, "perplexity": 350.29677328080635}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/3043595/discrete-probability-expected-value-and-random-variable-independence/3043730
|
# Discrete Probability: Expected Value and Random Variable independence [closed]
For this, I took n=2 which makes the set: {1,2,3,4}
Set will contain: {C1,C2,B1,B2}
X = 1 if the position of first cider bottle is 1
P(X=1) = 6/24 = 1/4
E(X) = 2 * 1/4 = 1/2
The general form will be: n*1/2n = 1/n.
This is my attempt, I'm not sure if I'm correct on this.
For this question:
You roll a fair die repeatedly and independently until the result is an even number. Define the random variables X = the number of times you roll the die and Y = the result of the last roll. For example, if the results of the rolls are 5, 1, 3, 3, 5, 2, then X = 6 and Y = 2. Prove that the random variables X and Y are independent.
I defined X = 1 if number of times roll die is 1 time
and Y =1 if result of last roll is even
So, Pr(X) = 3/6 = 1/2 = Pr(Y)
Pr(X and Y) = 1/2
This gives me 1/2 = 1/4 which is not independent but the question is asking to prove independence
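A quick Monte Carlo sketch (my addition, not part of the original post) shows what the marginal and joint probabilities actually look like, which may help locate the error in the attempt above: P(X=1) ≈ 1/2, P(Y=2) ≈ 1/3, and P(X=1 and Y=2) ≈ 1/6, consistent with independence.

```python
import random

trials = 200_000
x1 = y2 = both = 0
for _ in range(trials):
    rolls = 0
    while True:
        rolls += 1
        face = random.randint(1, 6)
        if face % 2 == 0:   # stop at the first even result
            break
    x1 += (rolls == 1)
    y2 += (face == 2)
    both += (rolls == 1 and face == 2)

print(x1 / trials, y2 / trials, both / trials)  # ~0.500, ~0.333, ~0.167
```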
## closed as unclear what you're asking by Did, NCh, Leucippus, Tianlalu, KReiserDec 19 '18 at 7:10
• Please don't ask multiple unrelated questions in one post. – Bungo Dec 17 '18 at 6:22
$$\{X=1\}$$ is the event that the first bottle is a cider bottle.
Probability on that: $$P(\text{first cider})=\frac{n}{n+2}$$
$$\{X=2\}$$ is the event that the first bottle contains beer and the second bottle contains cider.
Probability on that: $$P(\text{first beer})P(\text{second cider}\mid\text{ first beer})=\frac2{n+2}\frac{n}{n+1}$$.
$$\{X=3\}$$ is the event that the first bottle contains beer and the second bottle contains beer.
Probability on that: $$P(\text{first beer})P(\text{second beer}\mid\text{ first beer})=\frac2{n+2}\frac{1}{n+1}$$.
Now we are ready to find:$$\mathbb EX=P(X=1)+2P(X=2)+3P(X=3)=\frac{n}{n+2}+2\frac2{n+2}\frac{n}{n+1}+3\frac2{n+2}\frac1{n+1}=\frac{n+3}{n+1}$$
There are $$2$$ bottles that have index $$1$$ so that $$P(Y=1)=\frac2{n+2}$$.
$$\{X=1,Y=1\}$$ is the event that the first bottle is the cider bottle with index $$1$$.
Probability on that: $$P(X=1,Y=1)=\frac1{n+2}$$.
So a necessary condition for independence is: $$\frac{n}{n+2}\frac2{n+2}=P(X=1)P(Y=1)=P(X=1,Y=1)=\frac1{n+2}$$
leading to $$n=2$$.
So we conclude that there is no independence unless $n=2$, and there might be independence if $n=2$. To verify, we must check for that case whether $P(X=i)P(Y=j)=P(X=i,Y=j)$ for $i,j\in\{1,2\}$.
I leave that to you.
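A short simulation (my addition) of the setup in this answer supports the result: with n cider bottles and 2 beer bottles shuffled uniformly, the average position of the first cider bottle approaches (n+3)/(n+1).

```python
import random

n, trials = 5, 200_000
bottles = ["C"] * n + ["B"] * 2
total = 0
for _ in range(trials):
    random.shuffle(bottles)
    total += bottles.index("C") + 1   # 1-based position of first cider
print(total / trials, (n + 3) / (n + 1))   # both ~1.33 for n = 5
```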
For the first part of the first problem, $$X$$ can take only values $$1,2,$$ and $$3$$.
For $$X = 1$$
$$C_i ---------------$$ in the first place and the rest can be filled with $$2B_i$$s and $$(n-1) C_i$$s.
For X =2
$$B_i C_i ---------------$$ one of the $$B_i$$s in the first place, $$C_i$$ in the second place and the rest can be filled with the remaining $$B_i$$s and $$(n-1) C_i$$s
For X = 3
$$B_iB_i -----------------$$ Both the $$B_i$$s should occupy the first two places and the rest can be filled with the remaining $$C_i$$s.
Number of ways X = 1 can happen is = $${n\choose1} (n+1)!$$
Number of ways X = 2 can happen is = $${2\choose1}{n\choose1} n!$$
Number of ways X = 3 can happen is = $${2\choose1} n!$$
Total number of ways =$$(n+2)!$$
Sanity check to see if $$P(X=1)+P(X=2)+P(X=3) = 1$$
$$\frac{(n(n+1)! + 2nn! + 2n!)}{(n+2)!} = 1$$
Thus the expected value $$E(X) = \frac{1.{n\choose1}(n+1)!+2.{2\choose1}{n\choose1} n!+3.{2\choose1} n!}{(n+2)!} = \frac{n+3}{n+1}$$
Second Part
For $$Y = 1$$
$(B_1) ---------------$ $B_1$ in the first place, and the rest can be filled with $B_2$ and the $n$ $C_i$s, for a total of $(n+1)!$ ways.

$(C_1) ---------------$ $C_1$ in the first place, and the rest can be filled with the $B_i$s and the other $(n-1)$ $C_i$s, for a total of $(n+1)!$ ways.
Thus $$P(Y=1) = \frac{(2(n+1)!)}{(n+2)!}$$.
For Y =2
$-(B_1) ---------------$ The first place can be occupied by one of $C_2,\dots,C_n$ or $B_2$, with $B_1$ in the second place, and the rest can be filled with the remaining bottles, for a total of ${n\choose1}n!$ ways.

$-(C_1) ---------------$ The first place can be occupied by one of $C_2,\dots,C_n$ or $B_2$, with $C_1$ in the second place, and the rest can be filled with the remaining bottles, for a total of ${n\choose1}n!$ ways.
Thus$$P(Y=2) = \frac{(2n) n!}{(n+2)!}$$
and so on for (Y=n+1) for which the probability = $$P(Y=n+1) = \frac{(2) n!}{(n+2)!}$$
Thus $$E(Y) = \frac{2(n+1).n! \times 1 + 2n.n!\times 2 + 2(n-1)n!\times 3 +\cdots + 2(2).n!\times n+ 2(1).n!\times (n+1)}{(n+2)!}$$ $$= \frac{(n+3)}{3}$$
|
2019-09-23 11:08:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 69, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6645007133483887, "perplexity": 174.991597825948}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576355.92/warc/CC-MAIN-20190923105314-20190923131314-00170.warc.gz"}
|
https://courses.lumenlearning.com/ivytech-collegealgebra/chapter/graph-polynomial-functions/
|
Graph polynomial functions
We can use what we have learned about multiplicities, end behavior, and turning points to sketch graphs of polynomial functions. Let us put this all together and look at the steps required to graph polynomial functions.
How To: Given a polynomial function, sketch the graph.
1. Find the intercepts.
2. Check for symmetry. If the function is an even function, its graph is symmetrical about the y-axis, that is, f(–x) = f(x).
If a function is an odd function, its graph is symmetrical about the origin, that is, f(–x) = –f(x).
3. Use the multiplicities of the zeros to determine the behavior of the polynomial at the x-intercepts.
4. Determine the end behavior by examining the leading term.
5. Use the end behavior and the behavior at the intercepts to sketch a graph.
6. Ensure that the number of turning points does not exceed one less than the degree of the polynomial.
7. Optionally, use technology to check the graph.
Example 8: Sketching the Graph of a Polynomial Function
Sketch a graph of $f\left(x\right)=-2{\left(x+3\right)}^{2}\left(x - 5\right)$.
Solution
This graph has two x-intercepts. At x = –3, the factor is squared, indicating a multiplicity of 2. The graph will bounce at this x-intercept. At x = 5, the factor has a multiplicity of one, indicating the graph will cross through the axis at this intercept.
The y-intercept is found by evaluating f(0).
$f\left(0\right)=-2{\left(0+3\right)}^{2}\left(0 - 5\right)=-2\cdot 9\cdot \left(-5\right)=90$
The y-intercept is (0, 90).
Figure 13
Additionally, we can see the leading term, if this polynomial were multiplied out, would be $-2{x}^{3}$,
so the end behavior is that of a vertically reflected cubic, with the outputs decreasing as the inputs approach infinity, and the outputs increasing as the inputs approach negative infinity.
To sketch this, we consider that:
• As $x\to -\infty$ the function $f\left(x\right)\to \infty$, so we know the graph starts in the second quadrant and is decreasing toward the x-axis.
• Since $f\left(-x\right)=-2{\left(-x+3\right)}^{2}\left(-x - 5\right)$
is not equal to f(x), the graph does not display symmetry.
• At $\left(-3,0\right)$, the graph bounces off of the x-axis, so the function must start increasing.
At (0, 90), the graph crosses the y-axis at the y-intercept.
Figure 15
Somewhere after this point, the graph must turn back down or start decreasing toward the horizontal axis because the graph passes through the next intercept at (5, 0).
As $x\to \infty$ the function $f\left(x\right)\to \mathrm{-\infty }$, so we know the graph continues to decrease, and we can stop drawing the graph in the fourth quadrant.
Using technology, we can create the graph for the polynomial function, shown in Figure 16, and verify that the resulting graph looks like our sketch in Figure 15.
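For instance, a few lines of Python with Matplotlib (an illustrative addition, not part of the original lesson) reproduce the graph:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 6, 400)
f = -2 * (x + 3) ** 2 * (x - 5)

plt.plot(x, f)
plt.axhline(0, color="gray", linewidth=0.5)
plt.scatter([-3, 0, 5], [0, 90, 0])   # bounce, y-intercept, crossing
plt.title(r"$f(x) = -2(x+3)^2(x-5)$")
plt.show()
```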
Try It 3
Sketch a graph of $f\left(x\right)=\frac{1}{4}x{\left(x - 1\right)}^{4}{\left(x+3\right)}^{3}$.
Solution
|
2019-09-18 20:38:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5474618077278137, "perplexity": 505.23069255833224}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573331.86/warc/CC-MAIN-20190918193432-20190918215432-00528.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/physics/university-physics-with-modern-physics-14th-edition/chapter-44-particle-physics-and-cosmology-problems-discussion-questions-page-1519/q44-2
|
University Physics with Modern Physics (14th Edition)
It is possible to momentarily create particle-antiparticle pairs of energy $\Delta E$ for a short time $\Delta t$ as long as Heisenberg’s uncertainty principle for matter is obeyed. Empty space is not truly empty.
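As a rough numerical illustration (my addition, using standard constants), the time window allowed for a virtual electron-positron pair follows from Δt ≈ ħ/(2ΔE):

```python
hbar = 1.0546e-34          # reduced Planck constant, J*s
MeV = 1.602e-13            # joules per MeV
delta_E = 2 * 0.511 * MeV  # rest energy of an electron-positron pair
delta_t = hbar / (2 * delta_E)
print(f"{delta_t:.1e} s")  # on the order of 3e-22 s
```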
|
2019-11-20 14:02:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1863218992948532, "perplexity": 325.11467606780946}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670559.66/warc/CC-MAIN-20191120134617-20191120162617-00111.warc.gz"}
|
https://www.ora.ox.ac.uk/objects/uuid:555390ea-7ab4-4fa1-92e0-9baf3bd0fa8d
|
Journal article
### Asymptotic frequentist coverage properties of Bayesian credible sets for sieve priors
Abstract:
We investigate the frequentist coverage properties of (certain) Bayesian credible sets in a general, adaptive, nonparametric framework. It is well known that the construction of adaptive and honest confidence sets is not possible in general. To overcome this problem (in context of sieve type of priors), we introduce an extra assumption on the functional parameters, the so-called “general polished tail” condition. We then show that under standard assumptions, both the hierarchical and empirica...
Publication status:
Published
Peer review status:
Peer reviewed
### Access Document
Files:
• (Supplementary materials, 441.5KB)
• (Version of record, 311.6KB)
Publisher copy:
10.1214/19-AOS1881
### Authors
Institution:
University of Oxford
Division:
MPLS
Department:
Statistics
Oxford college:
Jesus College
Role:
Author
ORCID:
0000-0002-0998-6174
Publisher:
Institute of Mathematical Statistics Publisher's website
Journal:
Annals of Statistics Journal website
Volume:
48
Issue:
4
Pages:
2155-2179
Publication date:
2020-08-14
Acceptance date:
2019-06-09
DOI:
ISSN:
0090-5364
Source identifiers:
1023033
Language:
English
Keywords:
Pubs id:
pubs:1023033
UUID:
uuid:555390ea-7ab4-4fa1-92e0-9baf3bd0fa8d
Local pid:
pubs:1023033
Deposit date:
2019-06-26
|
2022-08-09 06:04:42
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8206684589385986, "perplexity": 7536.2403562857835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570901.18/warc/CC-MAIN-20220809033952-20220809063952-00012.warc.gz"}
|
http://wires.wiley.com/WileyCDA/WiresArticle/wisId-WIDM1120.html
|
How to cite this WIREs title:
WIREs Data Mining Knowl Discov
Multilinear and nonlinear generalizations of partial least squares: an overview of recent advances
Partial least squares (PLS) is an efficient multivariate statistical regression technique that has proven particularly useful for the analysis of highly collinear data. To predict response variables Y based on independent variables X, PLS attempts to find a set of common orthogonal latent variables by projecting both X and Y onto a new subspace. With the increasing interest in multi-way analysis, the extension to a multilinear regression model has also been developed, with the aim of analyzing two multidimensional tensor datasets. In this article, we overview the PLS-related methods, including linear, multilinear, and nonlinear variants, and discuss the strengths of the algorithms. As canonical correlation analysis (CCA) is a similar technique that aims to extract the most correlated latent components between two datasets, we also briefly discuss the extension of CCA to tensor space. Finally, several examples are given to compare these methods with respect to regression and classification techniques.
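As a hedged illustration of the linear PLS model discussed here (my addition, not drawn from the article), scikit-learn's PLSRegression can recover a low-dimensional latent structure from synthetic collinear data; the score computed below is a centered variant of the Q² measure that appears in the figure captions further down.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
t = rng.normal(size=(200, 2))                                # latent factors
X = t @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(200, 10))
Y = t @ rng.normal(size=(2, 3)) + 0.05 * rng.normal(size=(200, 3))

pls = PLSRegression(n_components=2)
pls.fit(X, Y)
Y_hat = pls.predict(X)
q2 = 1 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()
print(q2)   # close to 1 on this synthetic example
```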
• Technologies > Machine Learning
• Technologies > Prediction
The PLS model: data decomposition as a sum of rank‐one matrices.
Visualization of test dataset in two‐dimensional kernel‐based tensor canonical correlation analysis (KTCCA) latent space. Observe that the first two components obtained from KTCCA are discriminative for action classification.
Three examples of video sequences in tensor form for H‐W, H‐C, and walking actions.
The prediction performance for three-dimensional (3D) movement trajectories recorded from Elbow, Wrist, and Hand using four regression models: linear partial least squares (LP), higher-order partial least squares (HP), kernel-based tensor partial least squares (KTPLS) with a Chordal-distance-based kernel (KT-1), and KTPLS with a KL-divergence-based kernel (KT-2). The correlation coefficients r² between prediction and real data shown in (a) indicate that the best performance is obtained by KT-1, while the evaluation of $Q^2 = 1 - \|\hat{y}-y\|^2/\|y\|^2$ shown in (b) indicates that KT-2 outperforms the other methods.
Visualization of the higher-order partial least squares (HOPLS) model for the decomposition of the tensor $\underline{X}$. (a) Spatial loadings $Pr1$ corresponding to the first five latent vectors. Each row shows five significant loading vectors. Likewise, panel (b) depicts time-frequency loadings $Pr2$, with the β and γ bands exhibiting significant contributions.
The scheme for decoding of three‐dimensional (3D) hand movement trajectories from electrocorticography (ECoG) signals.
Schematic diagram of the higher-order partial least squares (HOPLS) model: approximating $\underline{X}$ as a sum of rank-(1, L2, L3) tensors. The approximation for $\underline{Y}$ follows a similar principle, with shared common latent components T.
The N‐way partial least squares (N‐PLS) model: data decomposition as a sum of rank‐one tensors and a sum of rank‐one matrices.
|
2018-10-16 07:27:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2936957776546478, "perplexity": 4544.317474358809}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510415.29/warc/CC-MAIN-20181016072114-20181016093614-00310.warc.gz"}
|
https://www.physicsforums.com/threads/rlc-circuit-problem.395083/
|
# RLC Circuit problem
1. Apr 13, 2010
### nucleawasta
1. The problem statement, all variables and given/known data
In an L-R-C series circuit, the rms voltage across the resistor is 32.0 V, across the capacitor it is 90.1 V, and across the inductor it is 51.5 V.
What is the rms voltage of the source?
2. Relevant equations
Well, there are lots of equations: V_rms = V/sqrt(2)
V_R=IR
V_L = I·X_L = IωL
V_C = I·X_C = I/(ωC)
3. The attempt at a solution
So what I did was convert all the values from rms into their amplitude voltages and summed them, then took the rms of that result. This is not correct, however.

I was using Kirchhoff's rule that the sum of voltages around a loop is 0. Does this principle hold true in this situation?
Last edited: Apr 13, 2010
2. Apr 13, 2010
### GRB 080319B
For an AC circuit, the voltage across the source is equal to the vector sum of the voltages of the components. Using the phasor diagram, you can graphically determine the direction of the source voltage, whether it leads the current or not, and how to find the vector sum of the voltages. Rms and peak amplitude voltage differ by a factor of sqrt(2), so you don't need to convert to peak and then back to rms; just leave everything in rms and the source voltage will be in rms. The source voltage is related to the other voltages like this:
Vs = sqrt{ Vr^2 + (VL - Vc)^2 }
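Plugging the numbers from the problem statement into this formula (a quick check, my addition):

```python
from math import sqrt

Vr, Vc, VL = 32.0, 90.1, 51.5      # rms volts, from the problem
Vs = sqrt(Vr**2 + (VL - Vc)**2)
print(round(Vs, 1))                 # ~50.1 V rms
```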
3. Apr 13, 2010
### nucleawasta
Thank you so much! I've been stuck on this for so long.
|
2018-03-17 19:09:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6132681369781494, "perplexity": 1023.7120154346762}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645280.4/warc/CC-MAIN-20180317174935-20180317194935-00684.warc.gz"}
|
https://meangreenmath.com/category/algebra-ii/
|
# Finding the Regression Line without Calculus
Last month, my latest professional article, Deriving the Regression Line with Algebra, was published in the April 2017 issue of Mathematics Teacher (Vol. 110, Issue 8, pages 594-598). Although linear regression is commonly taught in high school algebra, the usual derivation of the regression line requires multidimensional calculus. Accordingly, algebra students are typically taught the keystrokes for finding the line of best fit on a graphing calculator with little conceptual understanding of how the line can be found.
In my article, I present an alternative way that talented Algebra II students (or, in principle, Algebra I students) can derive the line of best fit for themselves using only techniques that they already know (in particular, without calculus).
For copyright reasons, I’m not allowed to provide the full text of my article here, though subscribers to Mathematics Teacher should be able to read the article by clicking the above link. (I imagine that my article can also be obtained via inter-library loan from a local library.) That said, I am allowed to share a macro-enabled Microsoft Excel spreadsheet that I wrote that allows students to experimentally discover the line of best fit:
http://www.math.unt.edu/~johnq/ExploringTheLineofBestFit.xlsm
I created this spreadsheet so that students can explore (which is, after all, the first E of the 5-E model) the properties of the line of best fit. In this spreadsheet, students can enter a data set with up to 10 points and then experiment with different slopes and $y$-intercepts. As they experiment, the spreadsheet keeps track of the current sum of the squares of the residuals as well as the best guess attempted so far. After some experimentation, the spreadsheet can also provide the correct answer so that students can see how close they got to the right answer.
# My Favorite One-Liners: Part 90
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
Here’s a typical problem that arises in Algebra II or Precalculus:
Find all solutions of $2 x^4 + 3 x^3 - 7 x^2 - 35 x -75 =0$.
There is a formula for solving such quartic equations, but it's very long and nasty and hence is not typically taught in high school. Instead, the one trick that's typically taught is the Rational Root Test: if there's a rational root of the above equation, then (when written in lowest terms) the numerator must be a factor of $-75$ (the constant term), while the denominator must be a factor of $2$ (the leading coefficient). So, using the rational root test, we conclude
Possible rational roots = $\displaystyle \frac{\pm 1, \pm 3, \pm 5, \pm 15, \pm 25, \pm 75}{\pm 1, \pm 2}$
$= \pm 1, \pm 3, \pm 5, \pm 15, \pm 25, \pm 75, \displaystyle \pm \frac{1}{2}, \pm \frac{3}{2}, \pm \frac{5}{2}, \pm \frac{15}{2}, \pm \frac{25}{2}, \pm \frac{75}{2}$.
Before blindly using synthetic division to see if any of these actually work, I’ll try to address a few possible misconceptions that students might have. One misconception is that there’s some kind of guarantee that one of these possible rational roots will actually work. Here’s another: students might think that we haven’t made much progress toward finding the solutions… after all, we might have to try synthetic division 24 times before finding a rational root. So, to convince my students that we actually have made real progress toward finding the answer, I’ll tell them:
Yes, 24 is a lot… but it's better than infinity.
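If you'd rather let a machine grind through the 24 candidates, here is a small brute-force check (my own sketch, not part of the original post):

```python
from fractions import Fraction

def p(x):
    return 2 * x**4 + 3 * x**3 - 7 * x**2 - 35 * x - 75

# Candidates from the Rational Root Test: (factor of 75) / (factor of 2).
candidates = {Fraction(sign * num, den)
              for num in (1, 3, 5, 15, 25, 75)
              for den in (1, 2)
              for sign in (1, -1)}

roots = sorted(c for c in candidates if p(c) == 0)
print(roots)  # [Fraction(-5, 2), Fraction(3, 1)]
```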
# My Favorite One-Liners: Part 86
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
To get students comfortable with $i = \sqrt{-1}$, I’ll often work through a quick exercise on the powers of $i$:
$i^1 = i$
$i^2 = -1$
$i^3 = -i$
$i^4 = 1$
$i^5 = i$
Students quickly see that the powers of $i$ are a cycle of length 4, so that $i^5 = i \cdot i \cdot i \cdot i \cdot i$ is the same thing as just $i$. So I tell my students:
There’s a technical term for this phenomenon: aye-yai-yai-yai-yai.
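The cycle is easy to verify with Python's built-in complex type (a tiny sketch):

```python
# Powers of i cycle with period 4: i, -1, -i, 1, then repeat.
for n in range(1, 9):
    print(n, 1j ** n)
```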
# My Favorite One-Liners: Part 85
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
Today’s one-liner is one that I’ll use when I want to discourage students from using a logically correct and laboriously cumbersome method. For example:
Find a polynomial $q(x)$ and a constant $r$ so that $x^3 - 6x^2 + 11x + 6 = (x-1)q(x) + r$.
Hypothetically, this can be done by long division:
However, this takes a lot of time and space, and there are ample opportunities to make a careless mistake along the way (particularly when subtracting negative numbers). Since there's an alternative method that can be used whenever we're dividing by something of the form $x-c$ or $x+c$, I'll tell my students:
Yes, you could use long division. You could also stick thumbtacks in your eyes; I don’t recommend it.
Instead, when possible, I guide students toward the quicker method of synthetic division:
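For readers who want the bookkeeping spelled out, here is a minimal synthetic-division helper (my own sketch, not from the original post):

```python
def synthetic_division(coeffs, c):
    """Divide a polynomial, given by coefficients in descending order, by (x - c).
    Returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for a in coeffs[1:]:
        out.append(a + c * out[-1])
    return out[:-1], out[-1]

# x^3 - 6x^2 + 11x + 6 divided by (x - 1):
q, r = synthetic_division([1, -6, 11, 6], 1)
print(q, r)  # [1, -5, 6] 12, i.e. q(x) = x^2 - 5x + 6 and r = 12
```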
# My Favorite One-Liners: Part 82
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
In differential equations, we teach our students that to solve a homogeneous differential equation with constant coefficients, such as
$y'''+y''+3y'-5y = 0$,
the first step is to construct the characteristic equation
$r^3 + r^2 + 3r - 5 = 0$
by essentially replacing $y'$ with $r$, $y''$ with $r^2$, and so on. Standard techniques from Algebra II/Precalculus, like the rational root test and synthetic division, are then used to find the roots of this polynomial; in this case, the roots are $r=1$ and $r = -1\pm 2i$. Therefore, switching back to the realm of differential equations, the general solution of the differential equation is
$y(t) = c_1 e^{t} + c_2 e^{-t} \cos 2t + c_3 e^{-t} \sin 2t$.
As $t \to \infty$, this general solution blows up (unless, by some miracle, $c_1 = 0$). The last two terms decay to 0, but the first term dominates.
The moral of the story is: if any of the roots have a positive real part, then the solution will blow up to $\infty$ or $-\infty$. On the other hand, if all of the roots have a negative real part, then the solution will decay to 0 as $t \to \infty$.
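A quick numerical check of that moral (a sketch using numpy, which the original post doesn't mention):

```python
import numpy as np

# Characteristic polynomial r^3 + r^2 + 3r - 5 of y''' + y'' + 3y' - 5y = 0:
roots = np.roots([1, 1, 3, -5])
print(roots)  # approximately 1, -1+2j, -1-2j

# Solutions decay only if every root has a negative real part:
print(all(r.real < 0 for r in roots))  # False: the root r = 1 makes solutions blow up
```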
This sets up the following awful math pun, which I first saw in the book Absolute Zero Gravity:
An Aeroflot plane en route to Warsaw ran into heavy turbulence and was in danger of crashing. In desperation, the pilot got on the intercom and asked, "Would everyone with a Polish passport please move to the left side of the aircraft." The passengers changed seats, and the turbulence ended. Why? The pilot achieved stability by putting all the Poles in the left half-plane.
# My Favorite One-Liners: Part 81
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
Here’s a problem that hypothetically could appear in Algebra II or Precalculus:
Find the solutions of $x^4 + 2x^3 + 10 x^2 - 6x + 65 = 0$.
While there is a formula for solving quartic equations, it’s extremely long and hence is not typically taught to high school students. Instead, the techniques that are typically taught are the Rational Root Test and (sometimes, depending on the textbook) Descartes’ Rule of Signs. The Rational Root Test constructs a list of possible rational roots (in this case $\pm 1, \pm 5, \pm 13, \pm 65$) to test… usually with synthetic division to accomplish this as quickly as possible.
The only problem is that there's no guarantee that any of these possible rational roots will actually work. Indeed, for this particular example, none of them work because all of the solutions are complex ($1 \pm 2i$ and $-2 \pm 3i$). So the Rational Root Test is of no help for this example, and students have to somehow try to find the complex roots.
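Numerically confirming that all four roots are complex (my own check, using numpy):

```python
import numpy as np

# x^4 + 2x^3 + 10x^2 - 6x + 65 = 0 has no real solutions:
print(np.roots([1, 2, 10, -6, 65]))  # approximately 1+2j, 1-2j, -2+3j, -2-3j
```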
So here’s the wisecrack that I use. This wisecrack really only works in Texas and other states in which the state legislature has seen the wisdom of allowing anyone to bring a handgun to class:
What do you do if a problem like this appears on the test? [Murmurs and various suggestions]
Shoot the professor. [Nervous laughter]
It’s OK; campus carry is now in effect. [Full-throated laughter.]
# My Favorite One-Liners: Part 80
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
Today’s awful pun comes courtesy of Math With Bad Drawings. Suppose we need to solve for $x$ in the following equation:
$2^{2x+1} = 3^{x}$.
Naturally, the first step is taking the logarithm of both sides. But with which base? There are two reasonable options for most handheld scientific calculators: base-10 and base-$e$. So I’ll tell the class my preference:
I’m organic; I only use natural logs.
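Either base works, of course; with natural logs the algebra runs as follows (my own worked lines, not from the original post):

$(2x+1) \ln 2 = x \ln 3$

$x(\ln 3 - 2 \ln 2) = \ln 2$

$x = \displaystyle \frac{\ln 2}{\ln 3 - \ln 4} = \frac{\ln 2}{\ln(3/4)} \approx -2.41$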
# My Favorite One-Liners: Part 68
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
When discussing the Laws of Logarithms, I’ll make a big deal of the fact that one law converts a multiplication problem into a simpler addition problem, while another law converts exponentiation into a simpler multiplication problem.
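In symbols, the two laws in question are

$\log_b(xy) = \log_b x + \log_b y \qquad$ and $\qquad \log_b(x^n) = n \log_b x$.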
After a few practice problems — and about 3 minutes before the end of class — I’ll inform my class that I’m about to tell the world’s worst math joke. Here it is:
After the flood, the ark landed, and Noah and the animals got out. And God said to Noah, “Go forth, be fruitful, and multiply.” So they disembarked.
Some time later, Noah went walking around and saw the two dogs with their baby puppies and the two cats with their baby kittens. However, he also came across two unhappy, frustrated, and disgruntled snakes. The snakes said to Noah, “We’re having some problems here; would you mind knocking down a tree for us?”
Noah says, “OK,” knocks down a tree, and goes off to continue his inspections.
Some time later, Noah returns, and sure enough, the two snakes are surrounding by baby snakes. Noah asked, “What happened?”
The snakes replied, “Well, you see, we’re adders. We need logs to multiply.”
After the laughter and groans subside, I then dismiss my class for the day:
Go forth, and multiply (pointing to the door of the classroom). For most of you, don’t be fruitful yet, but multiply. You’re dismissed.
# My Favorite One-Liners: Part 54
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
The complex plane is typically used to visually represent complex numbers. (There’s also the Riemann sphere, but I won’t go into that here.) The complex plane looks just like an ordinary Cartesian plane, except the “$x-$axis” becomes the real axis and the “$y-$axis” becomes the imaginary axis. It makes sense that this visualization has two dimensions since there are two independent components of complex numbers. For real numbers, only a one-dimensional visualization is needed: the number line that (hopefully) has been hammered into my students’ brains ever since elementary school.
While I’m on the topic, it’s unfortunate that “complex numbers” are called complex, as this often has the connotation of difficult. However, that’s not why the word complex was chosen. Even today, there is a second meaning of the word: a group of associated buildings in close proximity to each other is often called an “apartment complex” or an “office complex.” This is the real meaning of “complex numbers,” since the real and imaginary parts are joined to make a new number.
When I teach my students about complex numbers, I tell the following true story of when my daughter was just a baby, and I was extremely sleep-deprived and extremely desperate for ways to get her to sleep at night.
I tried counting monotonously, moving my finger to the right on a number line with each number:
$1, 2, 3, 4, ...$
That didn’t work, so I tried counting monotonously again, but this time moving my finger to the left on a number line with each number:
$-1, -2, -3, -4, ...$
That didn’t work either, so I tried counting monotonously once more, this time moving my finger up the imaginary axis:
$i, 2i, 3i, 4i...$
For the record, that didn’t work either. But it gave a great story to tell my students.
# My Favorite One-Liners: Part 49
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them. Today’s post is certainly not a one-liner but instead is my pseudohistory for how the roots of polynomials were found.
When I teach Algebra II or Precalculus (or train my future high school teachers to teach these subjects), we eventually land on the Rational Root Test and Descartes’ Rule of Signs as an aid for finding the roots of cubic equations or higher. Before I get too deep into this subject, however, I like to give a 10-15 minute pseudohistory about the discovery of how polynomial equations can be solved. Historians of mathematics will certain take issue with some of this “history.” However, the main purpose of the story is not complete accuracy but engaging students with the history of mathematics. I think the story I tell engages students while remaining reasonably accurate… and I always refer students to various resources if they want to get the real history.
To begin, I write down the easiest two equations to solve (in all cases, $a \ne 0$):
$ax + b = 0 \qquad$ and $\qquad ax^2 + bx + c = 0$
These are pretty easy to solve, with solutions well known to students:
$x = -\displaystyle \frac{b}{a} \qquad$ and $\qquad x = \displaystyle \frac{-b \pm \sqrt{b^2-4ac}}{2a}$
In other words, there are formulas that you can just stick in the coefficients and get the answer out without thinking too hard. Sure, there are alternate ways of solving for $x$ that could be easier, like factoring, but the worst-case scenario is just plugging into the formula.
These formulas were known to Babylonian mathematicians around 2000 B.C. (When I teach this in class, I write the date, and all other dates and discoverers, next to the equations for dramatic pedagogical effect.) Though not written in these modern terms, basically every ancient culture on the globe that did mathematics had some version of these formulas: for example, the ancient Egyptians, Greeks, Chinese, and Mayans.
Naturally, this leads to a simple question: is there a formula for the cubic:
$ax^3 + bx^2 + cx + d = 0$
Is there some formula that we can just plug $a$, $b$, $c$, and $d$ to just get the answer? The answer is, Yes, there is a formula. But it’s nasty. The formula was not discovered until 1535 A.D., and it was discovered by a man named Tartaglia. During the 1500s, the study of mathematics was less about the dispassionate pursuit of truth and more about exercising machismo. One mathematician would challenge another: “Here’s my cubic equation; I bet you can’t solve it. Nyah-nyah-nyah-nyah-nyah.” Then the second mathematician would solve it and challenge the first: “Here’s my cubic equation; I bet you can’t solve it. Nyah-nyah-nyah-nyah-nyah.” And so on. Well, Tartaglia came up with a formula that would solve every cubic equation. By plugging in $a$, $b$, $c$, and $d$, you get the answer out.
Tartaglia’s discovery was arguably the first triumph of the European Renaissance. The solution of the cubic was perhaps the first thing known to European mathematicians in the Middle Ages that was unknown to the ancient Greeks.
In 1535, Tartaglia was a relatively unknown mathematician, and so he told a more famous mathematician, Cardano, about his formula. Cardano told Tartaglia, why yes, that is very interesting, and then published the formula under his own name, taking credit without mention of Tartaglia. To this day, the formula is called Cardano’s formula.
So there is a formula. But it would take an entire chalkboard to write down the formula. That’s why we typically don’t make students learn this formula in high school; it’s out there, but it’s simply too complicated to expect students to memorize and use.
Naturally, this leads to the next question: is there a formula for the quartic?

$ax^4 + bx^3 + cx^2 + dx + e = 0$
The solution of the quartic was discovered less than five years later by an Italian mathematician named Ferrari. Ferrari found out that there is a formula that you can just plug in $a$, $b$, $c$, $d$, and $e$, turn the crank, and get the answers out. Writing out this formula would take two chalkboards. So there is a formula, but it’s also very, very complicated.
Of course, Ferrari had some famous descendants in the automotive industry.
So now we move onto my favorite equation, the quintic. (If you don’t understand why it’s my favorite, think about my last name.)
$ax^5 + bx^4 + cx^3 + dx^2 + ex + f = 0$
After solving the cubic and quartic in rapid succession, surely there should also be a formula for the quintic. So they tried, and they tried, and they tried, and they got nowhere fast. Finally, the problem was solved nearly 300 years later, in 1832 (for the sake of telling a good story, I don't mention Abel), by a French kid named Evariste Galois. Galois showed that there is no formula. That takes some real moxie. There is no formula. No matter how hard you try, you will not find a formula that can work for every quintic. Sure, there are some quintics that can be solved, like $x^5 = 0$. But there is no formula that will work for every single quintic.
Galois made this discovery when he was 19 years old… in other words, approximately the same age as my students. In fact, we know when he wrote down his discovery, because it happened the night before he died. You see, he was living in France in 1832. What was going on in France in 1832? I ask my class, have they seen Les Miserables?
France was torn upside-down in 1832 in the aftermath of the French Revolution, and young Galois got into a heated argument with someone over politics; Galois was a republican, while the other guy was a royalist. More importantly, both men were competing for the hand of the same young woman. So they decided to settle their differences like honorable Frenchmen, with a duel. So Galois wrote up his mathematical notes one night, and the next day, he fought the duel, he lost the duel, and he died.
Thus giving complete and total proof that tremendous mathematical genius does not prevent somebody from being a complete idiot.
For the present, there are formulas for cubic and quartic equations, but they’re long and impractical. And for quintic equations and higher, there is no formula. So that’s why we teach these indirect methods like the Rational Root Test and Descartes’ Rule of Signs, as they give tools to use to guess at the roots of higher-order polynomials without using something like the quadratic formula.
http://lambda-the-ultimate.org/node/1590
## R6RS Status Report
New status report on the R6RS effort (also available as PDF).
Previous LtU discussion (March 2006).
## Comment viewing options
### Fantastic work
Watching R6RS take shape is just fantastic. A careful and deliberate approach that reasons from first principles is very satisfying. Kudos!
Not to say that I didn't almost faint when I saw the "cons only works for lists" proposal, but you can't agree with everyone on everything.
Cheers,
Chris Dean
### Improper and mutable lists
My dynamically typed language Kogut has immutable lists and the second argument of cons is restricted to be a list. I would be pleasantly amused if Scheme was brought here too (but I don't believe that this would happen, tradition is too strong). It's perfectly compatible with dynamic typing.
The advantage of this approach is eliminating numerous dilemmas about which lists may or must be shared or copied, and how a particular operation should behave when given an improper list, and when particular errors are detected.
For example the Kogut operation corresponding to apply in Scheme has clear error conditions. There is no temptation to say “well, an improper list here should be an error, but we don't always detect this because it would increase the asymptotic cost for no other gain”.
The behavior of possible sharing of the argument list between the caller and the callee becomes irrelevant (technically it's still observable, but without mutation it doesn't matter in practice). There is no question about what should happen when the list bound to a “rest” parameter is mutated, or whether unquote-splicing as the last element of a list must copy the list it splices, or whether append is allowed to let the result share cons cells with an argument other than the last one if the last argument is an empty list.
Pairs make perfect sense as a separate type from non-empty lists.
### "quoth the raven, 404"
It seems to have moved to a new location.
Fixed.
### And, of course, so has the
And, of course, so has the PDF.
### weak pointers
This is the sort of material I like, since I (mostly) get Scheme and want my infrastructure to align with expansion into a full Scheme eventually, once a small kernel is grown enough. I doubt I'll ever have time to add something that appeared in R5RS, though, with other things on my plate. But the thing I'd skip first is practically designed to be added later as a layer. (Yes, I'm not naming the feature; it's a game for readers.)
Okay, weak pointers. I'd have thought weak pointers were a gimme. There must be something complex to argue about then, or a really irritating effect to achieve during garbage collection. Which is it? I like to design in weak pointer support first as part of the gc planning. Is this because I'm implementing in C++? Is it hard to consider implementing weak pointers in a language that never had them in the stardard?
Hmm, now that I think about it more, I can see weak pointers would be irritating if you intended to have a very complex incremental garbage collector -- one you were barely smart enough to debug, say -- or you wanted to let your friends plan to implement their own very complex gc schemes. It would be one more hoop to jump through, reducing your options a lot, and might imply you finish scanning the entire arena of memory that might have a strong reference to the object a weak pointer denotes. Folks who don't want to do this would strongly resist the feature of weak pointers, right?
ctdean: Not to say that I didn't almost faint when I saw the "cons only works for lists" proposal, but you can't agree with everyone on everything.
Yeah, that was weird. The 'only works for lists' has a strong static as opposed to dynamic flavor to it, that I'd expect appeals to folks who want to impose ideal prescriptive rules. Is the motivation entirely optimization? So code needn't perform the check of non-nil list termination? But, but ... then you'd also not perform checks for circular lists either, which is even more expensive. Oh, my bad, you can't make circular lists either using a cons requiring a nil terminated list (and assuming no pair mutation).
I don't like the prescriptive route much, unless another type is introduced with the constrained behavior. Limiting older system semantics in the name of optimization kinda sucks, because it feels like a slippery slope. It leads to more and more things for your own good, as decided by language gods.
[Edit: I forgot to add the riff I'd planned on weakness in prescriptive approaches, using Unicode as an example. I was handed a string library where the architect had prescribed 'there will be no embedded nuls' because the Unicode standard said strings won't contain null. Then I had to clean up the mess: what do you do when the embedded null actually appears? Go into denial? Better to have a separate descriptive approach that can represent anything possible, then layer on judgments about adherence to a prescriptive rule.]
### Weak pointers and finalizers
I agree that they are quite fundamental and should be provided by practical languages. Unfortunately there are at least two difficulties associated with them:
• Finalizers are started asynchronously and should run in a separate thread. See Hans Boehm's paper Destructors, Finalizers, and Synchronization. This means that any language which provides finalizers should also provide threads.
• It's not clear whether a finalizer should prevent objects it refers to from being finalized. Well, it is clear for me: it should. Otherwise the presence of finalizers leaks from abstractions: it's unsafe to use a finalizable object from another finalizer. But it's not clear for everybody, e.g. it doesn't prevent such an object from being finalized in Glasgow Haskell.
The semantics I prefer here has an unpleasant consequence: a finalizer may not refer to the object it is registered for, since it would never be finalized (a memory leak). Which sometimes forces to split an object into parts, so the finalizer can refer to only some fields of it, and this disallows some potential compiler optimizations which would remove indirections.
### weak pointers uncoupled from finalizers
I read Boehm's Destructors, Finalizers, and Synchronization just now, at least twice through parts that seemed to apply, thinking about it carefully since I could not find a definite interaction of finalizers and weak pointers. I think they are similar in usage context but mostly independent of each other in effect.
Thank you, that was a good read. I especially liked section 3 on the finalizer model used by Cedar's gc (and Boehm's own C++ gc), because it showed way to partially order objects to be finalized, which I hadn't thought about, so finalizers that used other objects to be finalized will go first.
I want to extend Boehm's terminology with another data structure named W: the set of objects containing weak pointers. (You might use W to refer to before or after gc images of this data, but you get roughly the same results either way.) Boehm refers to a Smalltalk weak array, for example, and these would be in the set W containing weak pointers.
So F is data for finalization and W is data containing weak pointers. Both of them require special handling in gc following the intial steps dealing with ordinary data. But they need not interfere with one another. Processing F would go first since objects referenced only by finalizers might add that last strong reference causing something to get copied. Then W would be processed to nil out weak pointers that never got copied from a strong reference.
Qrczak: I agree that [weak pointers and finalizers] are quite fundamental and should be provided by practical languages. Unfortunately there are at least two difficulties associated with them:
After careful thought, I believe weak pointers are not affected by finalizers; you can add both features to a language and have code for each almost totally ignore each other, except for handling them in the right order. So Boehm's paper doesn't offer a reason why adding weak pointers to a language might later run into trouble when adding more features. (But since this might not be really obvious, I can see why discussion might cause weak pointers to be tabled.)
To make matters confusing, Boehm notes a gc implementation might support notification after gc that weakly referenced objects had been collected, and this vaguely resembles finalization in nature, though no finalization is involved. However, notification is in no way required for collected weak pointers; that's just an optimization for certain uses. To perform the notification, you'd probably try to run notifiers in separate threads after gc, the same way you'd run finalizers; but that's just correct logic for running post gc async code -- it has nothing to do with weak pointers per se.
Anyway, I'm glad you brought this up, even if I'm taking the position the two features aren't as related as one might suppose.
[Edit: if running a finalizer could make an unreachable object reachable, some folks might want this to stop a weak pointer from being nil'd out of the weak collection. But that would just make things complex; you might as well have the finalizer put the object right back into the collection involved. There's no need to have gc guarantee such results.]
### On weak references and finalization
This paper deals with these issues and define a semantics for finalization wrt weak references.
### prefer solutions in gc layer
Daniel Yokomizo: This paper deals with these issues and define a semantics for finalization wrt weak references.
That 1999 Jones/Marlow/Elliott paper only describes what happens under Haskell when you don't give yourself freedom to change the garbage collector, then try to use Haskell as it exists already to implement finalizers wrt weak references in memo tables.
The problem they describe with "key in value" in weak hash tables doesn't exist when gc primitives are just slightly more powerful than the Haskell assumed in the paper. The problem is caused only when you have weak pointers to keys but strong pointers to values, with strong pointers from values to the keys. Nothing stops you from making the values weak as well, which totally solves the problem. And I suppose if the values had weak refs to the keys, the problem would also go away.
The third paragraph of their intro starts with this sentence: One "solution" is to build in memo functions as a primitive of the language implementation, with special magic in the garbage collector and elsewhere to deal with these questions.
Scare quotes here imply they didn't care to fix the problem in the gc layer. :-) But it's the gc layer I'm addressing in my earlier remarks. So this paper isn't really applicable if you have enough variety in the weak pointer support (tables with weak keys and values both, or in whatever mixture you want). The "key in value" problem doesn't exist if the value is also on the far side of a weak pointer.
### Weak pointers coupled with finalizers
Coupling weak pointers with finalizers is convenient because a weak pointer can serve as a handle to control the finalizer (trigger it manually or cancel it), and because typical uses of weak pointers are associated with finalizers (to remove the weak pointer itself from some data structure). This is what GHC does.
A weak reference registers a triple <K, V, finalizer>. A weak reference can be active or inactive; it is created active. A weak reference can be asked for a value; if it's active, it returns V, otherwise it tells so.
There are implicit strong references to all weak references, and from a weak reference to its finalizer. There are no implicit strong references to K and V.
The garbage collector computes the smallest set L satisfying these conditions:
1. GC roots are in L.
2. For each object from L, objects it holds with strong references are in L.
3. For each active weak reference, if its K is in L, then its V is in L too.
After L is computed, weak references whose Ks are not in L become inactive: they are removed from the global set of weak references, and their finalizers are scheduled for finalization. Other objects which are not in L are freed.
Often V is the same as K. For an entry in a weak dictionary, K is the key, and V is the pair consisting of the key and the value.
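For readers who want to experiment with these semantics, here is a small sketch in Python (its weakref module, not Scheme; the Key class is just a stand-in for K):

```python
import weakref

class Key:
    pass

k = Key()
wr = weakref.ref(k)                          # weak reference: does not keep k alive
weakref.finalize(k, print, "finalizer ran")  # finalizer tied to k's lifetime

print(wr() is k)   # True: k is still strongly reachable
del k              # drop the last strong reference; in CPython k dies here
print(wr())        # None: the weak reference has been cleared
```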
The design of Glasgow Haskell differs from this slightly:
A weak reference doesn't keep its finalizer with a strong reference. The third condition for L changes thus:
3. For each active weak reference, if its K is in L, then its V and its finalizer are in L too.
After finalizers are scheduled for finalization, these finalizers are added to L, together with other objects implied by condition 2 (in particular this may resurrect some K objects). The remaining objects not in L are freed.
Java is like Glasgow Haskell, except that K and V are always the same object, and that there are only three kinds of finalizers: doing nothing, calling K.finalize(), or adding the weak reference to a ReferenceQueue. .NET is similar.
If there is a cycle among K objects and finalizers associated with them, Glasgow Haskell runs all these finalizers in an unspecified order, and Kogut doesn't run these finalizers at all (memory leak).
### partially ordered finalizable objects
Qrczak: It's not clear whether a finalizer should prevent objects it refers to from being finalized. Well, it is clear for me: it should.
Sounds good to me too. I bought that part of Boehm's section 3, that being referenced by some other finalizer ought to grant an object a reprieve from current collection, since this would ensure you'd run finalizers in a good partial order (objects needed by no one before objects needed by some other object's finalizer).
### Overview Paper
Rys, it seems to me that you have made a lot of research on implementing GCs, reading your comments here in LtU. Would you consider writing an overview/tutorial paper, or a literate program heavy on the English with what you know of the topic? I would be super-interested in reading a reasoned, opinionated account that (1) allowed me to write an implementation from only reading a reasoned description of the collector, and (2) allowed me to see what forces are at play in the design of a modern collector that is similar and/or different from the currently-in-use generational ones. You come across in your posts as articulate, thoughtful, insightful; I would love to read a mind-dump of you on the topic.
### reference implementation?
That's very flattering and graciously said, but I fear my gc enthusiasm makes me a loudmouth. I'm nowhere near an expert (except almost in the narrow area of weak refs where I've done thinking, coding, and writing). I'd recently designed another specific revision of an earlier copy gc, making it easy to test mentally against ideas mentioned here. (Eg I can see how finalizers interact with my gc.) But ultimately my gc's not far from Standard ML's copy gc from the early 90's, which I imitated in a gc I wrote then for a Dylan prototype. Lately I've only added subtle wrinkles for specific useful reasons.
Matias Giovannini: Would you consider writing an overview/tutorial paper, or a literate program heavy on the English with what you know of the topic?
You asked that so nicely. Maybe, but I should code more and talk less first. I'm working on another rev of the basic runtime I like to use, with a focus on simplicity to get a Lisp-like assembler off the ground, to use for this thing I've been calling gyp. This time I'm using an MIT license; I might release early code after doing little more than gc and a parser to cause the gc to be used. I could annotate the code with the sort of literate commentary you want, but I'd hate to interrupt my flow if I get going.
A well done paper would need to consider and describe far more things than will actually appear in my next gc, and a nice academic style ought to be thorough and inclusive, and properly respectful of past contributions of others, etc, etc. You're not asking for this, I hope?
If you read gc source code in Standard ML, and also read Andrew Appel's very lucid account of Cheney style copy collection in Compiling with Continuations, then you'd understand my ideas roughly as diffs; but some diffs have interesting consequences.
For the gc, I work at the design level of raw memory that you'd tackle in assembler, but I prefer to simplify as much as possible using less painful C and C++ notation. (Enums are so much better to work with than integer constants.) But it's still very low level choices in memory layout, which must dovetail with the ways allocation and gc works. So picking formats directly affects options in gc behavior. For example, if you ask yourself how to represent weak semantics, you can go through a lot of variations, like I did. I promise I'll describe it sometime. But I don't want to abuse this forum.
### That's very flattering and
That's very flattering and graciously said, but I fear my gc enthusiasm makes me a loudmouth.
No, I don't think you're a loudmouth; on the contrary, I believe your enthusiasm makes you an excellent communicator of the issues involved. I enjoy reading your posts here, I would enjoy reading a hard-copy of a collated account of your experience. Of course, I don't want to put you in the obligation of producing anything at all, not code, much less an explanation; but I would love if you would, that's all.
A well done paper would need to consider and describe far more things than will actually appear in my next gc, and a nice academic style ought to be thorough and inclusive, and properly respectful of past contributions of others, etc, etc. You're not asking for this, I hope?
Of course not, not in the least. An academic paper (which might or might not be good) is, I think, a very specific cultural artifact that is in dialog with other papers, building upon them and expanding its field, and it has to explicitly acknowledge that debt, or heritage, or continuity. A technical note, if you will, can perfectly obviate the need for the usual Abstract--Intro--Related--Exposition--Conclusion--References structure (an inescapable straightjacket, I'm afraid); much as survey papers don't need to present original research, and often are no more than a commented list of references. A TN can be loud, and opinionated, and quirky; it's not that it's of lesser quality, but more of a different category altogether.
Again, far be from me to impose on you.
### gc technote pending
Your technote idea made the scope sound manageable, so I started one yesterday. But it's taking longer than I expected, so I wanted to post this note to say thanks for the suggestion first. (I've no projected completion date yet; I'm moving this holiday weekend, otherwise I'd probably get it done.)
Yesterday's first partial draft of a technote used an experimental style (very brief exploded details) that didn't gel at all. The result was a big pile of details missing patterns and flow; it seems I factored out the wrong part. So the next draft will aim for cogent overview as a priority.
I can justify (to myself) the cost of writing a technote as follows. I plan to release code using gc under terms letting anyone change it anyway they like. So if I explain why it works, folks will be less likely to seriously erode or impair the mechanism through poorly conceived alterations. :-)
But a very spare TN is my goal; it's tempting to do a better job than necessary, partly from joy in writing and partly from love of beauty in the subject matter. And brevity is better if one hopes to get the essential ideas well embedded in more minds. It won't be quite as fun as a core dump.
### When discussing papers that
When discussing papers that were previously discussed on LtU, please try to link to the LTU discussion instead of directly to the paper. Thanks!
https://www.allaboutcircuits.com/news/the-best-wire-for-breadboarding/
Get wired.
We've all seen it: breadboards covered in a rat's nest of wires. There are tons of tips and tricks for breadboard wiring, but let's start simple: what's the best wire to breadboard with?
### Solid Core Wire
Probably the most common breadboarding wire is simple solid core wire. This is typically sold in spools of varying lengths and many different colors. The commonly recommended size for breadboarding wire is 22 AWG, or 0.8 mm.
Pros:
• Cheap- Typically found for only a few dollars for a spool of 25 feet.
• Colorful- Since it's available in a variety of colors, there isn't an excuse to not at least color code your power wires. Color coding your signal wires can also help keep everything straight.
• Length- Solid core wire needs to be cut to length, so the correct sized wire can always be obtained.
Cons:
• Cutting and stripping- Since this type of wire comes on a spool, cutting and stripping to length is required. These two steps can add to the amount of time it takes to complete a project.
• Breaking- Solid core wire can break off inside of the holes on a breadboard; these broken wires are often very difficult or impossible to remove.
### Pre-cut Wires
Many breadboards come with assortments of pre-cut and bent wires, often with tinned leads. These wires come in a few different colors and sizes. Typically, the color of the wire also denotes its length. These assortments come from many different vendors, but they are often all very similar.
Pros:
• Various lengths- There are typically 8 or 9 different pre-cut lengths in these assortments.
• Tinning- The tinning on the ends of the jumper wires makes them more durable than non-tinned wires.
• Case- This type of wire typically comes in a handy clear organization case.
• Cuttable- Since these are still 22/23 AWG wire at their core, they can be cut to a desired length.
Cons:
• Color Coding- Since the color of the wires indicates their length, a long red wire wouldn't be an option.
• Cost- These pre-formed wires often can be more expensive than stripping your own.
• Fragile- While not as fragile as normal wires, these can also break off in the holes of a breadboard if they are stressed too much.
### Male to Male Jumpers
Another flavor of breadboard wire that is gaining traction are wires with header pins attached on both ends. These wires benefit from being substantially more durable than other types.
Pros:
• Durable- These cables are far less likely to snap off in the holes of a breadboard than solid core wires.
• Flexible- The cable part of these is often stranded core wire, which offers more flexibility and durability from a cable standpoint.
• Colors- These often come in many different colors so that color coding your wires is possible.
Cons:
• Lengths- These often are sold in only 1 or 2 different lengths, leading to large loops on breadboards.
• Cost- As the most expensive option on this list, you don't get as many jumpers for your money.
### Avoid These:
• Stranded Wire. Stranded wire makes wiring breadboards very difficult due to stray strands and issues with the spring contacts inside of the breadboard not gripping.
• Enamel Coated Wire. Often called magnet wire, this type of wire is often hard to strip.
• Thin Wire. Stick to solid core wire with a gauge of around 22 AWG for best results. Thin wire can be difficult to strip and is brittle.
With decisions such as color, cost, durability, and length to consider for your breadboard wires, they aren't a one-size-fits-all option. For a student whose project may be moved around, more durable wires, such as male to male jumpers, may be needed, while for professional use, solid core wire delivers a more professional look.
• fela 2016-01-09
Well, the best is probably to use pre-cut wires for whort distances, and male-to-male wires for longer distances. It’s good to have at least these two types. Well, the price actually is not a problem, as they can be used almost forever…
• BillJames 2016-01-22
Another alternative in a pinch is to use the individual conductors from the twisted pairs in cat5 cable. Just cut a foot or so off the cable and then pull the four 22ga twisted pair conductors out. On a per foot basis, this is also the most cost effective (about $0.01 per foot per individual cat5 conductor vs. about $0.14 per foot for the wire shown in the article).
• Karnovski 2016-01-22
I use solid core cat5 as well. It is cheap and works very well. Just make sure to buy the solid core version, not the stranded version.
• uwezi 2016-01-24
I find the pre-cut wires from you list above the most unusable alternative and really cannot understand why anyone would use them.
I prefer the male-to-male jumpers, but I have experienced quite impressive differences in quality. In my last order from a Swedish reseller of these Chinese mass products, the majority of the yellow wires did not conduct at all. I tried to find the cause of the problem, but both ends of the cable appeared to be perfectly terminated. And these wires also come in different quality when it comes to the actual contact pin…
• col_panek 2016-01-28
I’ve got a piece of 50-pair telephone cable, and used it for the last 40 years or so. It’s a little thin, and not tin plated, but it strips easy, is color-coded, and cheap.
• kjmclark 2016-04-01
Male-male jumpers. Definitely. You can buy them in lots of different lengths, and it’s best to have a variety of both colors and lengths. People should also be aware that there are male to male jumpers that use a kind of dupont crimped-on male (normal female-female wires, that are the female version of the male-to-male pictured, are dupont connector female-female). Those dupont male-male jumpers are not very good. I have a few of them, and hate them. They always feel flimsy and loose - not quite sure why - whereas the male-male jumpers pictured above always seem reliably tight in breadboards and female headers.
BTW, the best way I’ve found to keep those male-male jumpers organized is a combination of the twist-tie shown in the picture and clear plastic bags. I use the twist-ties to keep jumpers of the same length together, and clear ziplocs to hold a bunch of sets of jumpers. They tend to wander badly and get lost otherwise.
https://dml.cz/handle/10338.dmlcz/141701
# Article
Keywords:
Laplace operator; multiplicative perturbation; Dirichlet problem; Friedrichs extension; purely discrete spectra; purely continuous spectra
Summary:
We investigate the spectral properties of the differential operator $-r^s \Delta$, $s\ge 0$ with the Dirichlet boundary condition in unbounded domains whose boundaries satisfy some geometrical condition. Considering this operator as a self-adjoint operator in the space with the norm $\|u\|^2_{L_{2, s} (\Omega )}= \int _{\Omega } r^{-s} |u|^2 {\rm d} x$, we study the structure of the spectrum with respect to the parameter $s$. Further we give an estimate of the rate of condensation of discrete spectra when it changes to continuous.
http://openstudy.com/updates/513bb187e4b01c4790d21cee
## inkyvoyd: How many positive integers N < 1000 are there, such that 3^N - N^3 is a multiple of 5?
1. inkyvoyd
I rewrote it as modular arithmetic 3^N mod 10=N^3 mod 10 or |3^N mod 10 -N^3 mod 10|=5 But how do I solve this?
2. wio
If you do mod 10, the the solutions are $5-0 =5\\ 6-1=5\\ 7-2=5\\ 8-3=5\\ 9-4=5$
3. wio
Hmmm, so I mean it eliminates what they can be... a bit.
4. inkyvoyd
this problem is from brilliant.org . I have no idea what I'm supposed to do to solve it, but I know it's number theory. I tried factoring the expression into difference of cubes, bu that did not work,
5. wio
Are you sure that that modular arithmetic works?
6. inkyvoyd
Uh, no lol
7. wio
You're just checking that the last digit is the same...
8. inkyvoyd
yeah, but how do I even solve this problem? do you have any idea or hints? xD
9. wio
Honest, I'm not sure, but I do think it probably involve modular arithmetic.
10. inkyvoyd
Okay. Rawr. :/
11. wio
Hmmm, well we definitely have to leverage the fact that if it is a multiple of five, then it's last digit is either 0 or 5, right?
12. inkyvoyd
yeah. I got that part, which is why I was able to reformulate the problem in terms of modular arithmetic, which I have no knowledge of...
13. inkyvoyd
mm. So we solve both equations for n, and subtract the intersections?
14. wio
Anyway properties of mod that deal with addition?
15. inkyvoyd
uhh, I have no idea what I'm doing here lol
16. wio
Okay I messed up. It should be:$3^n-n^3 \equiv 0\pmod{10}\\ 3^n-n^3 \equiv 5\pmod{10}$
17. inkyvoyd
how does one solve these equations? if it takes too long to explain, do you know of any links where I could learn more?
18. wio
You'd have to learn modular arithmetic/number theory.
19. inkyvoyd
It's not fair :3
20. inkyvoyd
can you tell me the answer? :3
21. wio
I'm still trying to figure it out myself.
22. wio
Think about what happens to the last digit of the $$3^n$$ terms... each time it is multiplied by 3.
23. wio
When $$n=0$$, it is $$3^n \equiv 1\pmod{10}$$
24. inkyvoyd
oh god I think I figured out how to solve this problem then
25. inkyvoyd
well you showed me how I mean
26. wio
Okay I programmed a bit and got these patterns: $n^3\\ 0^3 \equiv 0 \pmod{10}\\ 1^3 \equiv 1 \pmod{10}\\ 2^3 \equiv 8 \pmod{10}\\ 3^3 \equiv 7 \pmod{10}\\ 4^3 \equiv 4 \pmod{10}\\ 5^3 \equiv 5 \pmod{10}\\ 6^3 \equiv 6 \pmod{10}\\ 7^3 \equiv 3 \pmod{10}\\ 8^3 \equiv 2 \pmod{10}\\ 9^3 \equiv 9 \pmod{10}\\ 3^n\\ 3^0 \equiv 1 \pmod{10}\\ 3^1 \equiv 3 \pmod{10}\\ 3^2 \equiv 9 \pmod{10}\\ 3^3 \equiv 7 \pmod{10}\\ 3^4 \equiv 1 \pmod{10}\\ 3^5 \equiv 3 \pmod{10}\\ 3^6 \equiv 9 \pmod{10}\\ 3^7 \equiv 7 \pmod{10}\\ 3^8 \equiv 1 \pmod{10}\\ 3^9 \equiv 3 \pmod{10}\\$
27. inkyvoyd
then we just subtract the two?
28. wio
Well, notice something interesting?
29. wio
We know that $$n^3$$ cycles with period 10, while $$3^n$$ cycles with period 4.
30. inkyvoyd
ugh, the lcm of those is 20, so we'll have to check all solutions for up to 20, then multiply by 1000/20?
31. wio
That's what I'm thinking...
32. wio
Here, I'll write a program to do it up to the first 20... brb.
33. wio
When I go to 20....$\begin{array}{rlcrcl} 3:& 3^{3} &-& 3^3 &\equiv& 0 \pmod{10}\\ 17:& 3^{17} &-& 17^3 &\equiv& 0 \pmod{10}\\ \end{array}$
34. wio
When I go to 100 $\begin{array}{rlcrcl} 3:& 3^{3} &-& 3^3 &\equiv& 0 \pmod{10}\\ 17:& 3^{17} &-& 17^3 &\equiv& 0 \pmod{10}\\ 23:& 3^{23} &-& 23^3 &\equiv& 0 \pmod{10}\\ 37:& 3^{37} &-& 37^3 &\equiv& 0 \pmod{10}\\ 43:& 3^{43} &-& 43^3 &\equiv& 0 \pmod{10}\\ 57:& 3^{57} &-& 57^3 &\equiv& 0 \pmod{10}\\ 63:& 3^{63} &-& 63^3 &\equiv& 0 \pmod{10}\\ 77:& 3^{77} &-& 77^3 &\equiv& 0 \pmod{10}\\ 83:& 3^{83} &-& 83^3 &\equiv& 0 \pmod{10}\\ 97:& 3^{97} &-& 97^3 &\equiv& 0 \pmod{10}\\ \end{array}$
35. wio
The only problem here is that I'm not sure if Javascript can handle numbers as large as $$3^{100}$$
36. inkyvoyd
I have mathematica - but I don't know how to use it lol
37. wio
But we seems to consistently 1/10 to be congruent... meaning that if we do 0 to 999, we'd get 100.
38. wio
Whoops.... I was never letting c go to above 10... hmmm I'm gonna have to try this...
39. wio
Okay, so i'm getting about 4 for every 20... that is to say 1/5 of them seem to work.
40. inkyvoyd
okay...
41. wio
Okay so here is what I did.... I kept track of $$3^n\mod{10}$$ as well as $$n^3\mod{10}$$
42. wio
So let's just say : $3^n \equiv a \pmod{10}\\ n^3 \equiv b \pmod{10}$
43. wio
Now in the case that $$a>b$$ you can just subtract them. In the case where $$a<b$$ you have to add $$10$$ to $$a$$ first, then subtract.
44. wio
Does that make sense?
45. inkyvoyd
Yeah. Wait, did you program this brute-force?
46. wio
Here look:$\begin{array}{rrrrrl} n & a & b & a-b & r & \text{pass/fail}\\ 0 & 1 & 0 & 1 & 1 & \text{fail}\\ 1 & 3 & 1 & 2 & 2 & \text{fail}\\ 2 & 9 & 8 & 1 & 1 & \text{fail}\\ 3 & 7 & 7 & 0 & 0 & \text{pass}\\ 4 & 1 & 4 & -3 & 7 & \text{fail}\\ 5 & 3 & 5 & -2 & 8 & \text{fail}\\ 6 & 9 & 6 & 3 & 3 & \text{fail}\\ 7 & 7 & 3 & 4 & 4 & \text{fail}\\ 8 & 1 & 2 & -1 & 9 & \text{fail}\\ 9 & 3 & 9 & -6 & 4 & \text{fail}\\ 10 & 9 & 0 & 9 & 9 & \text{fail}\\ 11 & 7 & 1 & 6 & 6 & \text{fail}\\ 12 & 1 & 8 & -7 & 3 & \text{fail}\\ 13 & 3 & 7 & -4 & 6 & \text{fail}\\ 14 & 9 & 4 & 5 & 5 & \text{pass}\\ 15 & 7 & 5 & 2 & 2 & \text{fail}\\ 16 & 1 & 6 & -5 & 5 & \text{pass}\\ 17 & 3 & 3 & 0 & 0 & \text{pass}\\ 18 & 9 & 2 & 7 & 7 & \text{fail}\\ 19 & 7 & 9 & -2 & 8 & \text{fail}\\ \end{array}\\ 4/20 = 1/5$
47. wio
Techincally, we determined you only needed the first 20.
48. wio
I could have done this by hand relatively quickly, but I am very prone to mistakes. That is why I programmed it.
49. inkyvoyd
I used python to get an answer of 200.
50. inkyvoyd
```python
import numbers  # (not actually needed -- see the next post)

i = 0
for n in range(1, 1000):
    if (3**n - n**3) % 5 == 0:
        i = i + 1
print(i)  # prints 200
```
Was that the answer you got? Good thing it agreed with our methods :)
51. inkyvoyd
Didn't actually need import numbers, but I mean funny how 5 lines of code can get a hour's worth of math work done :S
52. wio
I'm getting n/5 for all n that is a multiple of 20.
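For completeness, here is a check of the cycle argument itself (my own sketch): find the solutions within one period of length lcm(4, 10) = 20, then confirm the brute-force count.

```python
# Solutions within one period of length lcm(4, 10) = 20:
period = [n for n in range(20) if (pow(3, n, 5) - pow(n, 3, 5)) % 5 == 0]
print(period)  # [3, 14, 16, 17], i.e. 4 solutions per 20

# Brute-force count over 1 <= N <= 999 agrees:
print(sum(1 for n in range(1, 1000) if (3**n - n**3) % 5 == 0))  # 200
```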
https://www.physicsforums.com/threads/a-question-about-polynomials-of-degree-2.763365/
# A question about polynomials of degree 2
1. Jul 25, 2014
### eric_999
Hey!
In my calculus book they claim that a second degree polynomial can always be rewritten as x^2 - a^2 or as x^2 + a^2, if you use an appropriate change of variable. I was thinking about how this works.
Let's say we have a second degree polynomial (in general form?) ax^2 + bx + c = 0; then I can of course rewrite it as (x + (b/2a))^2 - (b/2a)^2 + c/a = 0. My question is whether they mean that (x + (b/2a)) = u and -(b/2a)^2 + c/a = k, so we can always write it as either u^2 - k^2 or u^2 + k^2, depending on whether k corresponds to a positive or negative number?
Sorry if my explanation sucks but hope you understand what I mean! Thanks!
2. Jul 25, 2014
### AlephZero
That's correct. Setting $u = x + \frac{b}{2a}$ makes the coefficient of the linear term in $u$ zero. You can then write the constant term as $+k^2$ or $-k^2$ depending on whether it is positive or negative.
You might like to think about how factorizing $u^2 - k^2$ is similar to solving the quadratic equation $ax^2 + bx + c = 0$ by completing the square, and the fact that $u^2 + k^2$ has no real roots.
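A quick worked instance (illustrative numbers, not from the book): dividing $2x^2 + 8x + 6 = 0$ by $2$ and completing the square gives $x^2 + 4x + 3 = (x+2)^2 - 1 = 0$, so with $u = x + 2$ and $k = 1$ this is $u^2 - k^2 = 0$, which factors as $(u-k)(u+k) = 0$ and gives $x = -1$ or $x = -3$. Changing the constant term to get $2x^2 + 8x + 10 = 0$ instead yields $(x+2)^2 + 1 = u^2 + k^2 = 0$, which indeed has no real roots.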
|
2018-02-25 14:37:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6406860947608948, "perplexity": 341.4188585902622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816462.95/warc/CC-MAIN-20180225130337-20180225150337-00145.warc.gz"}
|
http://mathoverflow.net/questions/107938/a-question-about-speisers-1934-result-on-the-riemann-hypothesis?sort=newest
|
# A question about Speiser's 1934 result on the Riemann hypothesis
A number of sources concerning Speiser's 1934 result state that the Riemann Hypothesis (RH) implies $\zeta'(s)\neq 0$ for all $0<\text{Re}(s)<1/2$. I have also seen some (possibly less reliable) sources suggest, without proof, that this is an if-and-only-if relationship, i.e. RH $\Longleftrightarrow$ $\zeta'(s)\neq 0$ in that strip, whereas the (perhaps more reliable) sources state only the forward implication, RH $\Longrightarrow$ $\zeta'(s)\neq 0$. My question is this: is Speiser's result an if-and-only-if relationship or not?
|
2015-09-04 19:04:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9156643748283386, "perplexity": 1376.7552494946017}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645359523.89/warc/CC-MAIN-20150827031559-00080-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://www.qtp.ufl.edu/ofdft/problem/ofdft.shtml
|
### Orbital Free Density Functional Theory
In the case of WDM there is a technical problem in solving for the Kohn-Sham orbitals and eigenvalues. Since the computational load from the eigenvalue problem scales, in general, as order $$N^3$$ where $$N$$ is the number of orbitals, the growth in the number of non-negligibly occupied KS orbitals with increasing temperature is a clear computational bottleneck. For complicated systems, the same bottleneck is encountered in ground-state simulations which use the DFT Born-Oppenheimer energy surface to drive the ionic dynamics.
One result has been the emergence of active research on orbital-free DFT (OFDFT), that is, approximate functionals for the ingredients of the KS free energy, namely the KS KE $${\mathcal T}_s$$, entropy $${\mathcal S}_s$$, and XC free energy $$\mathcal{F}_{xc}$$ or their ground-state counterparts. Almost all of this effort has been for ground-state OFKE functionals. With OFDFT functionals in hand, the DFT extremum condition yields a single non-linear Euler equation for the density which supplants the KS equation for the orbitals. (Note that most of the OFKE literature invokes the KS separation of the KE in order to use existing $$E_{xc}$$ approximations consistently.)
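To make the orbital-free idea concrete, here is a minimal sketch (an illustration under stated assumptions: a toy Gaussian density on a uniform grid, and the bare Thomas-Fermi form for the OFKE) of evaluating the simplest such functional, $$T_{TF}[n] = C_F \int n^{5/3}(\mathbf{r})\, d^3r$$, directly from a sampled density. No orbitals appear, so the order-$$N^3$$ eigenvalue problem never arises:

    import numpy as np

    # Thomas-Fermi constant C_F = (3/10)(3*pi^2)^(2/3), Hartree atomic units
    C_F = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)

    def t_tf(n, voxel_volume):
        """Thomas-Fermi kinetic energy of a density sampled on a uniform grid."""
        return C_F * np.sum(n ** (5.0 / 3.0)) * voxel_volume

    # Toy density: a Gaussian blob on a 32^3 grid (illustrative, not a real system)
    L, N = 10.0, 32
    x = np.linspace(-L / 2, L / 2, N)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    density = np.exp(-(X**2 + Y**2 + Z**2))
    print(t_tf(density, (L / N) ** 3))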
The finite-temperature OFDFT work is dominated by variants on Thomas-Fermi-von Weizsäcker theory. That type of theory, however, is known (on both fundamental and computational grounds) to be no more than qualitatively accurate in many circumstances. For more refined approximate OFDFT functionals at finite temperature, little or nothing is known about their accuracy for realistic systems and there is little fiducial data for comparison.
|
2017-11-19 06:50:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8660669326782227, "perplexity": 1334.2976460956336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805417.47/warc/CC-MAIN-20171119061756-20171119081756-00082.warc.gz"}
|
https://openstax.org/books/university-physics-volume-2/pages/16-summary
|
University Physics Volume 2
# Summary
### 16.1Maxwell’s Equations and Electromagnetic Waves
• Maxwell’s prediction of electromagnetic waves resulted from his formulation of a complete and symmetric theory of electricity and magnetism, known as Maxwell’s equations.
• The four Maxwell’s equations together with the Lorentz force law encompass the major laws of electricity and magnetism. The first of these is Gauss’s law for electricity; the second is Gauss’s law for magnetism; the third is Faraday’s law of induction (including Lenz’s law); and the fourth is Ampère’s law in a symmetric formulation that adds another source of magnetism, namely changing electric fields.
• The symmetry introduced between electric and magnetic fields through Maxwell’s displacement current explains the mechanism of electromagnetic wave propagation, in which changing magnetic fields produce changing electric fields and vice versa.
• Although light was already known to be a wave, the nature of the wave was not understood before Maxwell. Maxwell’s equations also predicted electromagnetic waves with wavelengths and frequencies outside the range of light. These theoretical predictions were first confirmed experimentally by Heinrich Hertz.
### 16.2Plane Electromagnetic Waves
• Maxwell’s equations predict that the directions of the electric and magnetic fields of the wave, and the wave’s direction of propagation, are all mutually perpendicular. The electromagnetic wave is a transverse wave.
• The strengths of the electric and magnetic parts of the wave are related by $c = E/B$, which implies that the magnetic field B is very weak relative to the electric field E.
• Accelerating charges create electromagnetic waves (for example, an oscillating current in a wire produces electromagnetic waves with the same frequency as the oscillation).
### 16.3Energy Carried by Electromagnetic Waves
• The energy carried by any wave is proportional to its amplitude squared. For electromagnetic waves, this means intensity can be expressed as
$I = \frac{c\varepsilon_0 E_0^2}{2}$
where I is the average intensity in $\text{W/m}^2$ and $E_0$ is the maximum electric field strength of a continuous sinusoidal wave. This can also be expressed in terms of the maximum magnetic field strength $B_0$ as
$I = \frac{cB_0^2}{2\mu_0}$
and in terms of both electric and magnetic fields as
$I = \frac{E_0 B_0}{2\mu_0}.$
The three expressions for $I_{\text{avg}}$ are all equivalent.
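A quick numerical cross-check (an added illustration; the 100 V/m field strength is an assumed example, not a value from the text):

    import math

    c = 2.998e8           # speed of light, m/s
    eps0 = 8.854e-12      # vacuum permittivity, F/m
    mu0 = 4e-7 * math.pi  # vacuum permeability, T·m/A

    E0 = 100.0            # maximum electric field, V/m (assumed example)
    B0 = E0 / c           # from c = E/B

    print(c * eps0 * E0**2 / 2)   # ≈ 13.3 W/m^2
    print(c * B0**2 / (2 * mu0))  # ≈ 13.3 W/m^2
    print(E0 * B0 / (2 * mu0))    # ≈ 13.3 W/m^2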
### 16.4Momentum and Radiation Pressure
• Electromagnetic waves carry momentum and exert radiation pressure.
• The radiation pressure of an electromagnetic wave is directly proportional to its energy density.
• The pressure is equal to twice the electromagnetic energy intensity if the wave is reflected and equal to the incident energy intensity if the wave is absorbed.
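For a sense of scale (an added illustration using the standard relations these bullets paraphrase, p = I/c for a perfect absorber and p = 2I/c for a perfect reflector; the solar irradiance value is an assumed example):

    c = 2.998e8   # speed of light, m/s
    I = 1361.0    # W/m^2, solar irradiance just above Earth's atmosphere

    print(I / c)      # absorbed:  ≈ 4.5e-6 Pa
    print(2 * I / c)  # reflected: ≈ 9.1e-6 Pa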
### 16.5The Electromagnetic Spectrum
• The relationship among the speed of propagation, wavelength, and frequency for any wave is given by $v = f\lambda$, so that for electromagnetic waves, $c = f\lambda$, where f is the frequency, $\lambda$ is the wavelength, and c is the speed of light (see the example after this list).
• The electromagnetic spectrum is separated into many categories and subcategories, based on the frequency and wavelength, source, and uses of the electromagnetic waves.
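For example (assumed numbers), the frequency of 550-nm green light follows directly from $c = f\lambda$:

    c = 2.998e8            # speed of light, m/s
    wavelength = 550e-9    # m
    print(c / wavelength)  # ≈ 5.45e14 Hz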
|
2021-04-21 04:53:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8664533495903015, "perplexity": 389.63135643199905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039508673.81/warc/CC-MAIN-20210421035139-20210421065139-00192.warc.gz"}
|
https://physics.stackexchange.com/questions/384190/given-an-irreversible-process-from-a-to-b-is-there-a-reversible-process-of-the
|
# Given an irreversible process from A to B, is there a reversible process of the same type from A to B?
I'm an undergraduate student, and I'm facing problems with entropy for the first time.
The word "adiathermal" is used here, as suggested in the comments, in order to describe a process occurring without any flow of heat and matter. "Adiabatic" was the word used at first. (Use the word you prefer, the meaning is what matters)
I know that if I consider an irreversible adiathermal process from A to B, the change in entropy $\Delta S_{AB}$ is greater than $0$, and that if I consider a reversible adiathermal process starting from A, its path will not pass through B (and its entropy difference will be $\Delta S_{AP} = 0$, where $P$ is its ending point).
My question is: when I have an irreversible $x$ process (where $x =$ isothermal, isobaric, etc.) from A to B, is there any reversible $x$ process going from state A to state B? E.g., given an irreversible isothermal process from A to B, is there a reversible isothermal process from A to B?
As explained before, I have found a counterexample for $x =$ adiathermal, so if I want to evaluate $\Delta S_{AB}$ I need to find another reversible path from A to B (i.e. rev. isobaric + rev. isothermal).
I think such an $x$ process does not exist; otherwise we would just use the Clausius integral ($\int_{A}^{B} \frac{\delta Q}{T}$) without noticing any difference between a reversible and an irreversible process. Furthermore, I think there is some deeper concept that I'm missing here.
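For instance (a standard textbook example, added for concreteness): the free expansion of an ideal gas from volume $V$ to $2V$ in an insulated container is adiathermal and irreversible, with $\delta Q = 0$ throughout. Yet, evaluating along a reversible isothermal path between the same end states (the temperature of an ideal gas is unchanged by free expansion),
$$\Delta S_{AB} = \int_A^B \frac{\delta Q_{rev}}{T} = nR\ln\frac{2V}{V} = nR\ln 2 > 0.$$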
• @diracula thanks, I'll correct the question. The terminology aspect is due to the fact my book and my lessons are not in English, so my translation was just wrong! – moonknight Feb 4 '18 at 11:43
• @diracula Never heard of this distinction. What is your source? – valerio Feb 4 '18 at 12:05
• @valerio92 I did not mean to suggest this is the common usage, nor that the question should be altered, only that it is something the asker may want to look out for when reading. It occurs in for example Blundell and Blundell's Concepts in Thermal Physics. I'll delete my comment as it is causing confusion. – diracula Feb 4 '18 at 12:47
• No problem, any suggestion is accepted. I have edited the answer only to make my question more understandable. – moonknight Feb 4 '18 at 13:02
## 2 Answers
Disclaimer: I don't agree with diracula's remark that a generic process with no heat exchange should be called "adiathermal", and only a reversible "adiathermal" process is called adiabatic. To me, adiabatic = no heat exchange, like in all thermodynamic texts that I know of. Therefore, I will just use the term "adiabatic".
My question is: when I have an irreversible $x$ process (where $x$= isothermal, isobaric, etc...) from A to B, is there any reversible $x$ process going from state A to state B?
The answer is yes, and it is fairly easy to see this.
Let $x$ be the thermodynamic variable that you want to keep constant. Now let's work in the $xy$ plane, where $y$ is any other thermodynamic variable describing the system.
If the state points $A$ and $B$ are connected by an irreversible iso-$x$ process, then $x(A)=x(B)$: in other words, they lie on a horizontal line in the $xy$ plane.
Is there a reversible iso-$x$ process connecting $A$ and $B$? Sure. Just draw a continuous segment connecting $A$ and $B$. This segment is horizontal in the $xy$ plane, i.e. is an iso-$x$ process. That is the reversible process you are looking for.
I know that if I consider an irreversible adiabatic process from A to B, the change in entropy $\Delta S_{AB}$ is greater than $0$, and that if I consider a reversible adiabatic process starting from A, its path will not pass through B (and its entropy difference will be $\Delta S_{AP} = 0$, where $P$ is its ending point)
The case of an adiabatic process is different: "adiabatic" means that $\delta Q=0$, but doesn't correspond in general to any iso-$x$ process, unless we restrict ourselves to reversible processes: in this case, $\delta Q = T dS$ and therefore adiabatic=iso-$S$.
Your analysis for an adiabatic process is right. For an irreversible adiabatic process between A and B, as a consequence of Clausius' inequality we have
$$\Delta S_{AB} > \int_A^B \frac{\delta Q} T = 0 \longrightarrow \Delta S_{AB} > 0$$
while for a reversible process we would have
$$\Delta S_{AB} = \int_A^B \frac{\delta Q} T = 0 \longrightarrow \Delta S_{AB} = 0$$
Since entropy is a state function, $\Delta S_{AB}$ is independent of the path connecting A and B. Therefore, this means that we cannot find both an irreversible and a reversible adiabatic process connecting the same points A and B.
• Thanks for the answer. About the second quote: I meant $\Delta S_{AP}$, I have corrected my question. I am not confusing the entropy change of the system with the entropy change of sys+surroundings, I only stated that entropy change of the system is $0$ for every reversible adiabatic process, whereas given an irreversible adiabatic process from A to B, there is no rev. adiabatic process going from A to B so I have to use a combination of different rev. processes in order to go from A to B (and then I can evaluate $\Delta S_{AB}$ because entropy is a state function as you said). – moonknight Feb 4 '18 at 12:35
• about the first quote: so the answer is "yes" for iso-x processes and "no" for adiabatic processes? – moonknight Feb 4 '18 at 13:04
• @moonknight Yes, this is correct. I updated the answer. – valerio Feb 4 '18 at 13:22
I have a different perspective than @valerio92. My perspective is that it depends on (a) how you define an irreversible isobaric- or isothermal process and (b) whether you are including just the system, or the system plus surroundings. Let's assume that the answer to (b) is that you are including just the system. Now, let's consider (a).
If you define an irreversible isobaric process as one in which the initial pressure is $p_0$, and the external force per unit area applied to the system is also $p_0$ for the entire process path, while you vary the temperature at the system boundary in some arbitrary way T(t) between $T_0$ and $T_1$, then the entropy change for this path will be the same as for a reversible path between the same two end (equilibrium) states. However, during the irreversible path, there will be entropy generated within the system which is transferred through the boundary to the surroundings during the process, over and above the entropy that is exchanged with the surroundings during the reversible path.
A second definition of an irreversible isobaric process is one in which the initial pressure of the system is $p_0$, but, at time zero, you suddenly change the external force per unit area applied to the system to $p_1$, and hold it constant at this value for all subsequent times; in this scenario, we also assume that the temperature of the boundary is held at $T_0$ for both the reversible and irreversible cases. In this case, the reversible process between the two end states would, of course, be quite different, but the entropy change would be the same. Here again, entropy would be generated within the system for the irreversible change, and this entropy would be transferred through the boundary to the surroundings during the process, over and above the entropy that is exchanged with the surroundings during the reversible path.
The same types of considerations could be applied to what we call isothermal processes. So, the real question is, what do you define as an isothermal or isobaric process?
|
2020-07-11 08:23:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8365811109542847, "perplexity": 231.40499928328194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655924908.55/warc/CC-MAIN-20200711064158-20200711094158-00336.warc.gz"}
|
https://www.rdocumentation.org/packages/rospca/versions/1.0.4/topics/robpca
|
# robpca
##### ROBust PCA algorithm
ROBPCA algorithm of Hubert et al. (2005) including reweighting (Engelen et al., 2005) and possible extension to skewed data (Hubert et al., 2009).
Keywords
multivariate, robust
##### Usage
robpca(x, k = 0, kmax = 10, alpha = 0.75, h = NULL, mcd = FALSE,
       ndir = "all", skew = FALSE, ...)
##### Arguments
x
An $n$ by $p$ matrix or data matrix with observations in the rows and variables in the columns.
k
Number of principal components that will be used. When k=0 (default), the number of components is selected using the criterion in Hubert et al. (2005).
kmax
Maximal number of principal components that will be computed, default is 10.
alpha
Robustness parameter, default is 0.75.
h
The number of outliers the algorithm should resist is given by $n-h$. Any value for h between $n/2$ and $n$ may be specified. Default is NULL which uses h=ceiling(alpha*n)+1. Do not specify alpha and h at the same time.
mcd
Logical indicating if the MCD adaptation of ROBPCA may be applied when the number of variables is sufficiently small (see Details). If mcd=FALSE (default), the full ROBPCA algorithm is always applied.
ndir
Number of directions used when computing the outlyingness (or the adjusted outlyingness when skew=TRUE), see outlyingness and adjOutl for more details.
skew
Logical indicating if the version for skewed data (Hubert et al., 2009) is applied, default is FALSE.
...
Other arguments to pass to methods.
##### Details
This function is based extensively on PcaHubert from rrcov and there are two main differences:
The outlyingness measure that is used for non-skewed data (skew=FALSE) is the Stahel-Donoho measure as described in Hubert et al. (2005) which is also used in PcaHubert. The implementation in mrfDepth (which is used here) is however much faster than the one in PcaHubert and hence more, or even all, directions can be considered when computing the outlyingness measure.
Moreover, the extension for skewed data of Hubert et al. (2009) (skew=TRUE) is also implemented here, but this is not included in PcaHubert.
For an extensive description of the ROBPCA algorithm we refer to Hubert et al. (2005) and to PcaHubert.
When mcd=TRUE and $n<5 \times p$, we do not apply the full ROBPCA algorithm. The loadings and eigenvalues are then computed as the eigenvectors and eigenvalues of the MCD estimator applied to the data set after the SVD step.
##### Value
A list with components:
loadings
Loadings matrix containing the robust loadings (eigenvectors), a numeric matrix of size $p$ by $k$.
eigenvalues
Numeric vector of length $k$ containing the robust eigenvalues.
scores
Scores matrix (computed as $(X - \text{center}) \cdot \text{loadings}$), a numeric matrix of size $n$ by $k$.
center
Numeric vector of length $p$ containing the centre of the data.
k
Number of (chosen) principal components.
H0
Logical vector of size $n$ indicating if an observation is in the initial h-subset.
H1
Logical vector of size $n$ indicating if an observation is kept in the reweighting step.
alpha
The robustness parameter $\alpha$ used throughout the algorithm.
h
The $h$-parameter used throughout the algorithm.
sd
Numeric vector of size $n$ containing the robust score distances within the robust PCA subspace.
od
Numeric vector of size $n$ containing the orthogonal distances to the robust PCA subspace.
cutoff.sd
Cut-off value for the robust score distances.
cutoff.od
Cut-off value for the orthogonal distances.
flag.sd
Numeric vector of size $n$ containing the SD-flags of the observations. The observations whose score distance is larger than cutoff.sd receive an SD-flag equal to zero. The other observations receive an SD-flag equal to 1.
flag.od
Numeric vector of size $n$ containing the OD-flags of the observations. The observations whose orthogonal distance is larger than cutoff.od receive an OD-flag equal to zero. The other observations receive an OD-flag equal to 1.
flag.all
Numeric vector of size $n$ containing the flags of the observations. The observations whose score distance is larger than cutoff.sd or whose orthogonal distance is larger than cutoff.od can be considered as outliers and receive a flag equal to zero. The regular observations receive flag 1.
##### References
Hubert, M., Rousseeuw, P. J., and Vanden Branden, K. (2005), "ROBPCA: A New Approach to Robust Principal Component Analysis," Technometrics, 47, 64--79.
Engelen, S., Hubert, M., and Vanden Branden, K. (2005), "A Comparison of Three Procedures for Robust PCA in High Dimensions," Austrian Journal of Statistics, 34, 117--126.
Hubert, M., Rousseeuw, P. J., and Verdonck, T. (2009), "Robust PCA for Skewed Data and Its Outlier Map," Computational Statistics & Data Analysis, 53, 2264--2274.
##### See Also
PcaHubert, outlyingness, adjOutl
##### Examples
# NOT RUN {
# Generate an illustrative contaminated data set with rospca's dataGen()
X <- dataGen(m=1, n=100, p=10, eps=0.2, bLength=4)$data[[1]]
# Robust PCA with 2 principal components
resR <- robpca(X, k=2)
# Diagnostic plot of score distances vs. orthogonal distances
diagPlot(resR)
# }
Documentation reproduced from package rospca, version 1.0.4, License: GPL (>= 2)
|
2020-07-13 11:27:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6209347248077393, "perplexity": 3127.2573753512306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657143365.88/warc/CC-MAIN-20200713100145-20200713130145-00185.warc.gz"}
|
http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CRBM.html
|
SHOGUN 4.2.0
CRBM Class Reference
## Detailed Description
A Restricted Boltzmann Machine.
An RBM is an energy based probabilistic model. It consists of two groups of variables: the visible variables $$v$$ and the hidden variables $$h$$. The key assumption that RBMs make is that the hidden units are conditionally independent given the visible units, and vice versa.
The energy function for RBMs with binary visible units is defined as:
$E(v,h) = - b^T v - c^T h - h^T Wv$
and for RBMs with gaussian (linear) visible units:
$E(v,h) = \frac{1}{2} v^T v - b^T v - c^T h - h^T Wv$
where $$b$$ is the bias vector for the visible units, $$c$$ is the bias vector for the hidden units, and $$W$$ is the weight matrix.
The probability distribution is defined through the energy function as:
$P(v,h) = \frac{exp(-E(v,h))}{\sum_{v,h} exp(-E(v,h))}$
The above definitions along with the independence assumptions result in the following conditionals:
$P(h=1|v) = \frac{1}{1+exp(-Wv-c)} \quad \text{for binary hidden units}$
$P(v=1|h) = \frac{1}{1+exp(-W^T h-b)} \quad \text{for binary visible units}$
$P(v|h) \sim \mathcal{N} (W^T h + b,1) \quad \text{for gaussian visible units}$
Note that when using gaussian visible units, the inputs should be normalized to have zero mean and unity standard deviation.
This class supports having multiple types of visible units in the same RBM. The visible units are divided into groups where each group can have its own type. The hidden units however are just one group of binary units.
Samples can be drawn from the model using Gibbs sampling.
Training is done using contrastive divergence [Hinton, 2002] or persistent contrastive divergence [Tieleman, 2008] (default).
Training progress can be monitored using the reconstruction error (default), which is the average squared difference between a training batch and the RBM's reconstruction of it. The reconstruction is generated using one step of gibbs sampling. Progress can also be monitored using the pseudo-log-likelihood which is an approximation to the log-likelihood. However, this is currently only supported for binary visible units.
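As an informal illustration of the conditionals, Gibbs step, and reconstruction error described above (a NumPy sketch with placeholder parameters; this is not the CRBM API itself):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Placeholder binary-binary RBM parameters
    num_visible, num_hidden, batch_size = 6, 4, 3
    W = rng.normal(0.0, 0.01, size=(num_hidden, num_visible))  # weights
    b = np.zeros((num_visible, 1))                             # visible bias
    c = np.zeros((num_hidden, 1))                              # hidden bias

    # One Gibbs step starting from a random visible batch
    v = rng.integers(0, 2, size=(num_visible, batch_size)).astype(float)
    p_h = sigmoid(W @ v + c)                           # P(h=1|v)
    h = (rng.random(p_h.shape) < p_h).astype(float)    # sample hidden units
    p_v = sigmoid(W.T @ h + b)                         # P(v=1|h)
    v_recon = (rng.random(p_v.shape) < p_v).astype(float)

    # Monitoring: average squared difference between batch and reconstruction
    print(np.mean((v - v_recon) ** 2))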
The rows of the visible_state matrix are divided into groups, one for each group of visible units. For example, if we have 3 groups of visible units: group 0 with 10 units, group 1 with 5 units, and group 2 with 6 units, the states of group 0 will be stored in visible_state[0:10,:], the states of group 1 will be stored in visible_state[10:15,:], and the states of group 2 will be stored in visible_state[15:21,:]. Note that the groups are numbered by the order in which they were added to the RBM using add_visible_group()
Definition at line 122 of file RBM.h.
Inheritance diagram for CRBM:
## Public Member Functions
CRBM ()
CRBM (int32_t num_hidden)
CRBM (int32_t num_hidden, int32_t num_visible, ERBMVisibleUnitType visible_unit_type=RBMVUT_BINARY)
virtual ~CRBM ()
virtual void add_visible_group (int32_t num_units, ERBMVisibleUnitType unit_type)
virtual void initialize_neural_network (float64_t sigma=0.01)
virtual void set_batch_size (int32_t batch_size)
virtual void train (CDenseFeatures< float64_t > *features)
virtual void sample (int32_t num_gibbs_steps=1, int32_t batch_size=1)
virtual CDenseFeatures
< float64_t > *
sample_group (int32_t V, int32_t num_gibbs_steps=1, int32_t batch_size=1)
virtual void sample_with_evidence (int32_t E, CDenseFeatures< float64_t > *evidence, int32_t num_gibbs_steps=1)
virtual CDenseFeatures
< float64_t > *
sample_group_with_evidence (int32_t V, int32_t E, CDenseFeatures< float64_t > *evidence, int32_t num_gibbs_steps=1)
virtual void reset_chain ()
virtual float64_t free_energy (SGMatrix< float64_t > visible, SGMatrix< float64_t > buffer=SGMatrix< float64_t >())
virtual void free_energy_gradients (SGMatrix< float64_t > visible, SGVector< float64_t > gradients, bool positive_phase=true, SGMatrix< float64_t > hidden_mean_given_visible=SGMatrix< float64_t >())
virtual void contrastive_divergence (SGMatrix< float64_t > visible_batch, SGVector< float64_t > gradients)
virtual float64_t reconstruction_error (SGMatrix< float64_t > visible, SGMatrix< float64_t > buffer=SGMatrix< float64_t >())
virtual float64_t pseudo_likelihood (SGMatrix< float64_t > visible, SGMatrix< float64_t > buffer=SGMatrix< float64_t >())
virtual CDenseFeatures
< float64_t > *
visible_state_features ()
virtual SGVector< float64_tget_parameters ()
virtual SGMatrix< float64_tget_weights (SGVector< float64_t > p=SGVector< float64_t >())
virtual SGVector< float64_tget_hidden_bias (SGVector< float64_t > p=SGVector< float64_t >())
virtual SGVector< float64_tget_visible_bias (SGVector< float64_t > p=SGVector< float64_t >())
virtual int32_t get_num_parameters ()
virtual const char * get_name () const
virtual CSGObjectshallow_copy () const
virtual CSGObjectdeep_copy () const
virtual bool is_generic (EPrimitiveType *generic) const
template<class T >
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
template<>
void set_generic ()
void unset_generic ()
virtual void print_serializable (const char *prefix="")
virtual bool save_serializable (CSerializableFile *file, const char *prefix="")
virtual bool load_serializable (CSerializableFile *file, const char *prefix="")
void set_global_io (SGIO *io)
SGIOget_global_io ()
void set_global_parallel (Parallel *parallel)
Parallelget_global_parallel ()
void set_global_version (Version *version)
Versionget_global_version ()
SGStringList< char > get_modelsel_names ()
void print_modsel_params ()
char * get_modsel_param_descr (const char *param_name)
index_t get_modsel_param_index (const char *param_name)
void build_gradient_parameter_dictionary (CMap< TParameter *, CSGObject * > *dict)
virtual void update_parameter_hash ()
virtual bool parameter_hash_changed ()
virtual bool equals (CSGObject *other, float64_t accuracy=0.0, bool tolerant=false)
virtual CSGObjectclone ()
## Public Attributes
int32_t cd_num_steps
bool cd_persistent
bool cd_sample_visible
float64_t l2_coefficient
float64_t l1_coefficient
int32_t monitoring_interval
ERBMMonitoringMethod monitoring_method
int32_t max_num_epochs
int32_t gd_mini_batch_size
float64_t gd_learning_rate
float64_t gd_learning_rate_decay
float64_t gd_momentum
SGMatrix< float64_thidden_state
SGMatrix< float64_tvisible_state
SGIOio
Parallelparallel
Versionversion
Parameterm_parameters
Parameterm_model_selection_parameters
uint32_t m_hash
## Protected Member Functions
virtual void mean_hidden (SGMatrix< float64_t > visible, SGMatrix< float64_t > result)
virtual void mean_visible (SGMatrix< float64_t > hidden, SGMatrix< float64_t > result)
virtual void sample_hidden (SGMatrix< float64_t > mean, SGMatrix< float64_t > result)
virtual void sample_visible (SGMatrix< float64_t > mean, SGMatrix< float64_t > result)
virtual void sample_visible (int32_t index, SGMatrix< float64_t > mean, SGMatrix< float64_t > result)
virtual void load_serializable_pre () throw (ShogunException)
virtual void load_serializable_post () throw (ShogunException)
virtual void save_serializable_pre () throw (ShogunException)
virtual void save_serializable_post () throw (ShogunException)
## Protected Attributes
int32_t m_num_hidden
int32_t m_num_visible
int32_t m_batch_size
int32_t m_num_visible_groups
CDynamicArray< int32_t > * m_visible_group_types
CDynamicArray< int32_t > * m_visible_group_sizes
CDynamicArray< int32_t > * m_visible_state_offsets
int32_t m_num_params
SGVector< float64_tm_params
## Friends
class CDeepBeliefNetwork
## Constructor & Destructor Documentation
CRBM ( )
default constructor
Definition at line 43 of file RBM.cpp.
CRBM ( int32_t num_hidden )
Constructs an RBM with no visible units. The visible units can be added later using add_visible_group()
Parameters
num_hidden Number of hidden units
Definition at line 48 of file RBM.cpp.
CRBM ( int32_t num_hidden, int32_t num_visible, ERBMVisibleUnitType visible_unit_type = RBMVUT_BINARY )
Constructs an RBM with a single group of visible units
Parameters
num_hidden Number of hidden units num_visible Number of visible units visible_unit_type Type of the visible units
Definition at line 54 of file RBM.cpp.
~CRBM ( )
virtual
Definition at line 62 of file RBM.cpp.
## Member Function Documentation
void add_visible_group ( int32_t num_units, ERBMVisibleUnitType unit_type )
virtual
Adds a group of visible units to the RBM
Parameters
num_units Number of visible units unit_type Type of the visible units
Definition at line 69 of file RBM.cpp.
void build_gradient_parameter_dictionary ( CMap< TParameter *, CSGObject * > * dict )
inherited
Builds a dictionary of all parameters in SGObject as well of those of SGObjects that are parameters of this object. Dictionary maps parameters to the objects that own them.
Parameters
dict dictionary of parameters to be built.
Definition at line 597 of file SGObject.cpp.
CSGObject * clone ( )
virtualinherited
Creates a clone of the current object. This is done via recursively traversing all parameters, which corresponds to a deep copy. Calling equals on the cloned object always returns true although none of the memory of both objects overlaps.
Returns
an identical copy of the given object, which is disjoint in memory. NULL if the clone fails. Note that the returned object is SG_REF'ed
Definition at line 714 of file SGObject.cpp.
void contrastive_divergence ( SGMatrix< float64_t > visible_batch, SGVector< float64_t > gradients )
virtual
Computes the gradients using contrastive divergence
Parameters
visible_batch States of the visible units gradients Array in which the results are stored. Length get_num_parameters()
Definition at line 355 of file RBM.cpp.
CSGObject * deep_copy ( ) const
virtualinherited
A deep copy. All the instance variables will also be copied.
Definition at line 198 of file SGObject.cpp.
bool equals ( CSGObject * other, float64_t accuracy = 0.0, bool tolerant = false )
virtualinherited
Recursively compares the current SGObject to another one. Compares all registered numerical parameters, recursion upon complex (SGObject) parameters. Does not compare pointers!
May be overwritten but please do with care! Should not be necessary in most cases.
Parameters
other object to compare with accuracy accuracy to use for comparison (optional) tolerant allows linient check on float equality (within accuracy)
Returns
true if all parameters were equal, false if not
Definition at line 618 of file SGObject.cpp.
float64_t free_energy ( SGMatrix< float64_t > visible, SGMatrix< float64_t > buffer = SGMatrix() )
virtual
Computes the average free energy on a given batch of visible unit states.
The free energy for a vector $$v$$ is defined as:
$F(v) = - log(\sum_h exp(-E(v,h)))$
which yields the following (in vectorized form):
$F(v) = -b^T v - \sum log(1+exp(Wv+c)) \quad \text{for binary visible units}$
$F(v) = \frac{1}{2} v^T v - b^T v - \sum log(1+exp(Wv+c)) \quad \text{for gaussian visible units}$
Parameters
visible States of the visible units buffer A matrix of size num_hidden*batch_size. used as a buffer during computation. If not given, a new matrix is allocated and used as a buffer.
Returns
Average free energy over the given batch
Definition at line 272 of file RBM.cpp.
void free_energy_gradients ( SGMatrix< float64_t > visible, SGVector< float64_t > gradients, bool positive_phase = true, SGMatrix< float64_t > hidden_mean_given_visible = SGMatrix() )
virtual
Computes the gradients of the free energy function with respect to the RBM's parameters
Parameters
visible States of the visible units gradients Array in which the results are stored. Length get_num_parameters() positive_phase If true, the result vector is reset to zero and the gradients are added to it with a positive sign. If false, the result vector is not reset and the gradients are added to it with a negative sign. This is useful during contrastive divergence. hidden_mean_given_visible Means of the hidden states given the visible states. If not given, means will be computed by calling mean_hidden()
Definition at line 318 of file RBM.cpp.
SGIO * get_global_io ( )
inherited
get the io object
Returns
io object
Definition at line 235 of file SGObject.cpp.
Parallel * get_global_parallel ( )
inherited
get the parallel object
Returns
parallel object
Definition at line 277 of file SGObject.cpp.
Version * get_global_version ( )
inherited
get the version object
Returns
version object
Definition at line 290 of file SGObject.cpp.
SGVector< float64_t > get_hidden_bias ( SGVector< float64_t > p = SGVector() )
virtual
Returns the bias vector of the hidden units
Parameters
p If specified, the bias vector is extracted from it instead of m_params
Definition at line 580 of file RBM.cpp.
SGStringList< char > get_modelsel_names ( )
inherited
Returns
vector of names of all parameters which are registered for model selection
Definition at line 498 of file SGObject.cpp.
char * get_modsel_param_descr ( const char * param_name )
inherited
Returns description of a given parameter string, if it exists. SG_ERROR otherwise
Parameters
param_name name of the parameter
Returns
description of the parameter
Definition at line 522 of file SGObject.cpp.
index_t get_modsel_param_index ( const char * param_name )
inherited
Returns index of model selection parameter with provided index
Parameters
param_name name of model selection parameter
Returns
index of model selection parameter with provided name, -1 if there is no such
Definition at line 535 of file SGObject.cpp.
virtual const char* get_name ( ) const
virtual
Returns the name of the SGSerializable instance. It MUST BE the CLASS NAME without the prefixed 'C'.
Returns
name of the SGSerializable
Implements CSGObject.
Definition at line 346 of file RBM.h.
virtual int32_t get_num_parameters ( )
virtual
Returns the number of parameters
Definition at line 344 of file RBM.h.
virtual SGVector get_parameters ( )
virtual
Returns the parameter vector of the RBM
Definition at line 317 of file RBM.h.
SGVector< float64_t > get_visible_bias ( SGVector< float64_t > p = SGVector() )
virtual
Returns the bias vector of the visible units
Parameters
p If specified, the bias vector is extracted from it instead of m_params
Definition at line 590 of file RBM.cpp.
SGMatrix< float64_t > get_weights ( SGVector< float64_t > p = SGVector() )
virtual
Returns the weights matrix
Parameters
p If specified, the weight matrix is extracted from it instead of m_params
Definition at line 570 of file RBM.cpp.
void initialize_neural_network ( float64_t sigma = 0.01 )
virtual
Initializes the weights of the RBM. Must be called after all the visible groups have been added, and before the RBM is used.
Parameters
sigma Standard deviation of the gaussian used to initialize the weights
Definition at line 86 of file RBM.cpp.
bool is_generic ( EPrimitiveType * generic ) const
virtualinherited
If the SGSerializable is a class template then TRUE will be returned and GENERIC is set to the type of the generic.
Parameters
generic set to the type of the generic if returning TRUE
Returns
TRUE if a class template.
Definition at line 296 of file SGObject.cpp.
bool load_serializable ( CSerializableFile * file, const char * prefix = "" )
virtualinherited
Load this object from file. If it will fail (returning FALSE) then this object will contain inconsistent data and should not be used!
Parameters
file where to load from prefix prefix for members
Returns
TRUE if done, otherwise FALSE
Definition at line 369 of file SGObject.cpp.
void load_serializable_post ( ) throw ( ShogunException )
protectedvirtualinherited
Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::LOAD_SERIALIZABLE_POST is called.
Exceptions
ShogunException will be thrown if an error occurs.
Definition at line 426 of file SGObject.cpp.
void load_serializable_pre ( ) throw ( ShogunException )
protectedvirtualinherited
Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::LOAD_SERIALIZABLE_PRE is called.
Exceptions
ShogunException will be thrown if an error occurs.
Definition at line 421 of file SGObject.cpp.
void mean_hidden ( SGMatrix< float64_t > visible, SGMatrix< float64_t > result )
protectedvirtual
Computes the mean of the hidden states given the visible states
Definition at line 450 of file RBM.cpp.
void mean_visible ( SGMatrix< float64_t > hidden, SGMatrix< float64_t > result )
protectedvirtual
Computes the mean of the visible states given the hidden states
Definition at line 468 of file RBM.cpp.
bool parameter_hash_changed ( )
virtualinherited
Returns
whether parameter combination has changed since last update
Definition at line 262 of file SGObject.cpp.
void print_modsel_params ( )
inherited
prints all parameter registered for model selection and their type
Definition at line 474 of file SGObject.cpp.
void print_serializable ( const char * prefix = "" )
virtualinherited
prints registered parameters out
Parameters
prefix prefix for members
Definition at line 308 of file SGObject.cpp.
float64_t pseudo_likelihood ( SGMatrix< float64_t > visible, SGMatrix< float64_t > buffer = SGMatrix() )
virtual
Computes an approximation to the pseudo-likelihood. See this tutorial for more details. Only works with binary visible units
Parameters
visible States of the visible units buffer A matrix of size num_visible*batch_size. used as a buffer during computation. If not given, a new matrix is allocated and used as a buffer.
Returns
Approximation to the average pseudo-likelihood over the given batch
Definition at line 420 of file RBM.cpp.
float64_t reconstruction_error ( SGMatrix< float64_t > visible, SGMatrix< float64_t > buffer = SGMatrix() )
virtual
Computes the average reconstruction error which is defined as:
$E = \frac{1}{N} \sum_i (v_i - \widetilde{v})^2$
where $$\widetilde{v}$$ is computed using one step of gibbs sampling and $$N$$ is the batch size
Returns
Average reconstruction error over the given batch
Definition at line 398 of file RBM.cpp.
void reset_chain ( )
virtual
Resets the state of the markov chain used for sampling, which is stored in the visible_state matrix, to random values
Definition at line 265 of file RBM.cpp.
void sample ( int32_t num_gibbs_steps = 1, int32_t batch_size = 1 )
virtual
Draws samples from the marginal distribution of the visible units using Gibbs sampling. The sampling starts from the values in the RBM's visible_state matrix and result of the sampling is stored there too.
Parameters
num_gibbs_steps Number of Gibbs sampling steps batch_size Number of samples to be drawn. A seperate chain is used for each sample
Definition at line 178 of file RBM.cpp.
CDenseFeatures< float64_t > * sample_group ( int32_t V, int32_t num_gibbs_steps = 1, int32_t batch_size = 1 )
virtual
Draws Samples from $$P(V)$$ where $$V$$ is one of the visible unit groups. The sampling starts from the values in the RBM's visible_state matrix and result of the sampling is stored there too.
Parameters
V Index of the visible unit group to be sampled num_gibbs_steps Number of Gibbs sampling steps batch_size Number of samples to be drawn. A seperate chain is used for each sample
Returns
Sampled states of group V
Definition at line 193 of file RBM.cpp.
CDenseFeatures< float64_t > * sample_group_with_evidence ( int32_t V, int32_t E, CDenseFeatures< float64_t > * evidence, int32_t num_gibbs_steps = 1 )
virtual
Draws Samples from $$P(V|E=evidence)$$ where $$E$$ is one of the visible unit groups and $$V$$ is another visible unit group. The sampling starts from the values in the RBM's visible_state matrix and result of the sampling is stored there too.
Parameters
V Index of the visible unit group to be sampled E Index of the evidence visible unit group evidence States of the evidence visible unit group num_gibbs_steps Number of Gibbs sampling steps
Returns
Sampled states of group V
Definition at line 245 of file RBM.cpp.
void sample_hidden ( SGMatrix< float64_t > mean, SGMatrix< float64_t > result )
protectedvirtual
Samples the hidden states according to the provided means
Definition at line 519 of file RBM.cpp.
void sample_visible ( SGMatrix< float64_t > mean, SGMatrix< float64_t > result )
protectedvirtual
Samples the visible states according to the provided means
Definition at line 526 of file RBM.cpp.
void sample_visible ( int32_t index, SGMatrix< float64_t > mean, SGMatrix< float64_t > result )
protectedvirtual
Samples one group of visible states according to the provided means
Definition at line 534 of file RBM.cpp.
void sample_with_evidence ( int32_t E, CDenseFeatures< float64_t > * evidence, int32_t num_gibbs_steps = 1 )
virtual
Draws Samples from $$P(V|E=evidence)$$ where $$E$$ is one of the visible unit groups and $$V$$ is all the visible unit excluding the ones in group $$E$$. The sampling starts from the values in the RBM's visible_state matrix and result of the sampling is stored there too.
Parameters
E Index of the evidence visible unit group evidence States of the evidence visible unit group num_gibbs_steps Number of Gibbs sampling steps
Definition at line 211 of file RBM.cpp.
bool save_serializable ( CSerializableFile * file, const char * prefix = "" )
virtualinherited
Save this object to file.
Parameters
file where to save the object; will be closed during returning if PREFIX is an empty string. prefix prefix for members
Returns
TRUE if done, otherwise FALSE
Definition at line 314 of file SGObject.cpp.
void save_serializable_post ( ) throw ( ShogunException )
protectedvirtualinherited
Can (optionally) be overridden to post-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::SAVE_SERIALIZABLE_POST is called.
Exceptions
ShogunException will be thrown if an error occurs.
Reimplemented in CKernel.
Definition at line 436 of file SGObject.cpp.
void save_serializable_pre ( ) throw ( ShogunException )
protectedvirtualinherited
Can (optionally) be overridden to pre-initialize some member variables which are not PARAMETER::ADD'ed. Make sure that at first the overridden method BASE_CLASS::SAVE_SERIALIZABLE_PRE is called.
Exceptions
ShogunException will be thrown if an error occurs.
Definition at line 431 of file SGObject.cpp.
void set_batch_size ( int32_t batch_size )
virtual
Sets the number of train/test cases the RBM will deal with
Parameters
batch_size Batch size
Definition at line 95 of file RBM.cpp.
void set_generic ( )
inherited
Definition at line 41 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 46 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 51 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 56 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 61 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 66 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 71 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 76 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 81 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 86 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 91 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 96 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 101 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 106 of file SGObject.cpp.
void set_generic ( )
inherited
Definition at line 111 of file SGObject.cpp.
void set_generic ( )
inherited
set generic type to T
void set_global_io ( SGIO * io )
inherited
set the io object
Parameters
io io object to use
Definition at line 228 of file SGObject.cpp.
void set_global_parallel ( Parallel * parallel )
inherited
set the parallel object
Parameters
parallel parallel object to use
Definition at line 241 of file SGObject.cpp.
void set_global_version ( Version * version )
inherited
set the version object
Parameters
version version object to use
Definition at line 283 of file SGObject.cpp.
CSGObject * shallow_copy ( ) const
virtualinherited
A shallow copy. All the SGObject instance variables will be simply assigned and SG_REF-ed.
Reimplemented in CGaussianKernel.
Definition at line 192 of file SGObject.cpp.
void train ( CDenseFeatures< float64_t > * features )
virtual
Trains the RBM
Parameters
features Input features. Should have as many features as there are visible units in the RBM.
Definition at line 107 of file RBM.cpp.
void unset_generic ( )
inherited
unset generic type
this has to be called in classes specializing a template class
Definition at line 303 of file SGObject.cpp.
void update_parameter_hash ( )
virtualinherited
Updates the hash of current parameter combination
Definition at line 248 of file SGObject.cpp.
virtual CDenseFeatures* visible_state_features ( )
virtual
Returns the states of the visible unit as CDenseFeatures<float64_t>
Definition at line 311 of file RBM.h.
## Friends And Related Function Documentation
friend class CDeepBeliefNetwork
friend
Definition at line 124 of file RBM.h.
## Member Data Documentation
int32_t cd_num_steps
Number of Gibbs sampling steps performed before each weight update during training. Default value is 1.
Definition at line 372 of file RBM.h.
bool cd_persistent
If true, persistent contrastive divergence is used. Default value is true.
Definition at line 376 of file RBM.h.
bool cd_sample_visible
If true, the visible units are sampled during contrastive divergence. If false, the visible units are not sampled, and their mean values are used instead. Default value is false
Definition at line 382 of file RBM.h.
float64_t gd_learning_rate
Gradient descent learning rate, default value is 0.1.
Definition at line 410 of file RBM.h.
float64_t gd_learning_rate_decay
Gradient descent learning rate decay: the learning rate is updated at each iteration i according to alpha(i) = decay * alpha(i-1). Default value is 1.0 (no decay).
Definition at line 417 of file RBM.h.
int32_t gd_mini_batch_size
Size of the mini-batch used during gradient descent training; if 0, full-batch training is performed. Default value is 0.
Definition at line 407 of file RBM.h.
float64_t gd_momentum
Gradient descent momentum, default value is 0.9.
For more details on momentum, see this paper [Sutskever, 2013]
Definition at line 427 of file RBM.h.
SGMatrix hidden_state
States of the hidden units
Definition at line 430 of file RBM.h.
SGIO* io
inherited
io
Definition at line 369 of file SGObject.h.
float64_t l1_coefficient
L1 Regularization coeff, default value is 0.0
Definition at line 388 of file RBM.h.
float64_t l2_coefficient
L2 Regularization coeff, default value is 0.0
Definition at line 385 of file RBM.h.
int32_t m_batch_size
protected
Batch size
Definition at line 443 of file RBM.h.
inherited
parameters wrt which we can compute gradients
Definition at line 384 of file SGObject.h.
uint32_t m_hash
inherited
Hash of parameter values
Definition at line 387 of file SGObject.h.
Parameter* m_model_selection_parameters
inherited
model selection parameters
Definition at line 381 of file SGObject.h.
int32_t m_num_hidden
protected
Number of hidden units
Definition at line 437 of file RBM.h.
int32_t m_num_params
protected
Number of parameters
Definition at line 458 of file RBM.h.
int32_t m_num_visible
protected
Number of visible units
Definition at line 440 of file RBM.h.
int32_t m_num_visible_groups
protected
Number of visible unit groups
Definition at line 446 of file RBM.h.
Parameter* m_parameters
inherited
parameters
Definition at line 378 of file SGObject.h.
SGVector m_params
protected
Parameters
Definition at line 461 of file RBM.h.
CDynamicArray* m_visible_group_sizes
protected
Size of each visible unit group
Definition at line 452 of file RBM.h.
CDynamicArray* m_visible_group_types
protected
Type of each visible unit group
Definition at line 449 of file RBM.h.
CDynamicArray* m_visible_state_offsets
protected
Row offsets for accessing the states of each visible unit groups
Definition at line 455 of file RBM.h.
int32_t max_num_epochs
Maximum number of iterations over the training set, default value is 1.
Definition at line 401 of file RBM.h.
int32_t monitoring_interval
Number of weight updates between each evaluation of the monitoring method. Default value is 10.
Definition at line 393 of file RBM.h.
ERBMMonitoringMethod monitoring_method
Monitoring method
Definition at line 396 of file RBM.h.
Parallel* parallel
inherited
parallel
Definition at line 372 of file SGObject.h.
Version* version
inherited
version
Definition at line 375 of file SGObject.h.
SGMatrix visible_state
States of the visible units
Definition at line 433 of file RBM.h.
The documentation for this class was generated from the following files:
|
2016-06-30 06:44:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5874727368354797, "perplexity": 14324.399506045418}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00061-ip-10-164-35-72.ec2.internal.warc.gz"}
|
https://web2.0calc.com/questions/inequality_122
|
# Inequality
Solve the inequality 2x - 5 < -x - 12. Give your answer as an interval.
Dec 20, 2021
#1
$$\left(-\infty,\ -\frac{7}{3}\right)$$ One of these two should be correct.
Dec 20, 2021
#2
$$x\in \mathbb R\ |\ x<-\frac{7}{3}$$
asinus Dec 20, 2021
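For the record, the one-line algebra behind both answers (added for completeness): $$2x - 5 < -x - 12 \;\Longrightarrow\; 3x < -7 \;\Longrightarrow\; x < -\frac{7}{3},$$ which is the interval $\left(-\infty, -\frac{7}{3}\right)$.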
|
2022-06-29 22:28:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9714916348457336, "perplexity": 8811.715679320752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103645173.39/warc/CC-MAIN-20220629211420-20220630001420-00266.warc.gz"}
|
https://www.physicsforums.com/threads/trick-question.787474/
|
# Trick Question?
1. Dec 13, 2014
### Zondrina
1. The problem statement, all variables and given/known data
A fluid flows through a pipe. The flow varies with time. We want to estimate the volume of fluid ($V$, in $L$) that passes through the pipe between time $t = 10 s$ and $t = 14 s$ (i.e. we want to integrate the flow between these times).
The available instrumentation allows us to measure the instantaneous flow rate (in $L/s$) at any three times of our choosing. We might, for example, decide to measure the flow at $t = 10 s$, $t = 12 s$, and $t = 14 s$. At what three times $t_1, t_2, t_3$ would you choose to measure the flow? Keep in mind that we want to produce the best possible estimate.
Now assume the results of the measurements are $M(t_1)$, $M(t_2)$, and $M(t_3)$ in $L/s$. Give an expression for the volume of fluid that passes through the pipe in the period of interest.
2. Relevant equations
3. The attempt at a solution
So we want to estimate $V = \int_{10}^{14} \text{flow}(t) \space dt$.
I think the times $t_1, t_2, t_3$ that have been mentioned in the problem are appropriate, but it obviously can't be that easy right? I can't really see how to break up the interval any better than that.
Then I would want to measure $\frac{d \text{flow}}{dt}$ at $t_1, t_2, t_3$ and call those $M(t_1)$, $M(t_2)$, and $M(t_3)$.
2. Dec 13, 2014
### SteamKing
Staff Emeritus
Unless you are restricted from measuring the flow at all three times, more measurements = better results. If you can provide measurement at equally spaced time intervals, the total flow over the time interval can be computed by applying Simpson's Rule or some other numerical integration method.
3. Dec 13, 2014
### Zondrina
Only three points are allowed I believe.
After doing some research I found my symbolism to be a bit off. So we want $V = \int_{10}^{14} Q(t) \space dt$, where $Q(t) = \frac{dV}{dt}$ is the flow rate.
So if we measure $\frac{dV}{dt}$ (aka $Q(t)$) at the three times mentioned in the problem statement, we obtain $Q(10) = M(10), Q(12) = M(12), Q(14) = M(14)$. These are the $y$ values in this case.
So an expression for the volume of fluid is given by:
$$V = \int_{10}^{14} Q(t) \space dt \approx \frac{2}{3} \left[ M(10) + 4M(12) + M(14) \right]$$
Using Simpson's 1/3 rule with $h = 2$. Alternatively:
$$V = \int_{10}^{14} Q(t) \space dt \approx \left[ M(10) + 2M(12) + M(14) \right]$$
Using composite trapezoidal integration with $h = 2$.
I am unsure this is the best estimate, but for now it seems like it might be.
4. Dec 14, 2014
### SteamKing
Staff Emeritus
Simpson's rule will probably give you a more accurate result than the trapezoidal rule. With only three points, it's rather easy to check the flow with both.
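To make the comparison concrete, here is a minimal Python sketch (an editorial addition, not from the thread) applying both three-point rules; the measurement values are hypothetical placeholders:

```python
# Three-point estimates of V = integral of Q(t) dt from t = 10 to t = 14,
# given flow measurements at t = 10, 12, 14 with spacing h = 2.

def simpson_three_point(m10, m12, m14, h=2.0):
    # Simpson's 1/3 rule: (h/3) * [f0 + 4*f1 + f2]
    return (h / 3.0) * (m10 + 4.0 * m12 + m14)

def trapezoid_three_point(m10, m12, m14, h=2.0):
    # Composite trapezoidal rule: (h/2) * [f0 + 2*f1 + f2]
    return (h / 2.0) * (m10 + 2.0 * m12 + m14)

# Hypothetical measurements in L/s (placeholders, not real data):
m10, m12, m14 = 3.0, 3.6, 3.2

print("Simpson estimate:  ", simpson_three_point(m10, m12, m14), "L")
print("Trapezoid estimate:", trapezoid_three_point(m10, m12, m14), "L")
```

With equally spaced points, Simpson's rule is exact for integrands up to cubics while the trapezoidal rule is exact only for linear ones, which is why the former is usually the better three-point estimate.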
|
2017-10-17 04:45:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8427321910858154, "perplexity": 253.11298846792673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820700.4/warc/CC-MAIN-20171017033641-20171017053641-00262.warc.gz"}
|
https://www.semanticscholar.org/paper/Number-of-visits-in-arbitrary-sets-for-dynamics-Gallo-Haydn/64806d36f5f5760a1eb322ddba54bb9d4982bdae
|
Corpus ID: 236469107
# Number of visits in arbitrary sets for $\phi$-mixing dynamics
@inproceedings{Gallo2021NumberOV,
title={Number of visits in arbitrary sets for \$\phi\$-mixing dynamics},
author={Sandro Gallo and Nicolai T. A. Haydn and Sandro Vaienti},
year={2021}
}
• Published 2021
• Mathematics
It is well-known that, for sufficiently mixing dynamical systems, the number of visits to balls and cylinders of vanishing measure is approximately compound Poisson distributed in the Kac scaling. Here we extend these kinds of results to the case where the target set is an arbitrary set with vanishing measure, for φ-mixing systems. The error of approximation in total variation is derived using the Stein-Chen method. An important part of the paper is dedicated to examples to illustrate the assumptions…
#### References
SHOWING 1-10 OF 55 REFERENCES
Limiting distribution and error terms for the number of visits to balls in non-uniformly hyperbolic dynamical systems
• Mathematics
• 2014
We show that for systems that allow a Young tower construction with polynomially decaying correlations the return times to metric balls are in the limit Poisson distributed. We also provide error…
Poisson approximation for the number of visits to balls in non-uniformly hyperbolic dynamical systems
• Mathematics, Physics
• Ergodic Theory and Dynamical Systems
• 2012
Abstract We study the number of visits to balls Br(x), up to time t/μ(Br(x)), for a class of non-uniformly hyperbolic dynamical systems, where μ is the Sinai–Ruelle–Bowen measure. Outside a set of…
Spatio-temporal Poisson processes for visits to small sets
• Mathematics
• 2018
For many measure preserving dynamical systems $(\Omega,T,m)$ the successive hitting times to a small set is well approximated by a Poisson process on the real line. In this work we define a new…
Return times distribution for Markov towers with decay of correlations
• Mathematics
• 2010
In this paper we prove two results. First we show that dynamical systems with a $\phi$-mixing measure have in the limit Poisson distributed return times almost everywhere. We use the Chen-Stein…
Poisson law for some non-uniformly hyperbolic dynamical systems with polynomial rate of mixing
• Mathematics
• Ergodic Theory and Dynamical Systems
• 2015
We consider some non-uniformly hyperbolic invertible dynamical systems which are modeled by a Gibbs–Markov–Young tower. We assume a polynomial tail for the inducing time and a polynomial control of…
Compound Poisson Approximation for Nonnegative Random Variables Via Stein's Method
• Mathematics
• 1992
The aim of this paper is to extend Stein's method to a compound Poisson distribution setting. The compound Poisson distributions of concern here are those of the form POIS$(\nu)$, where $\nu$ is a…
Limiting Entry and Return Times Distribution for Arbitrary Null Sets
• Mathematics
• 2020
We describe an approach that allows us to deduce the limiting return times distribution for arbitrary sets to be compound Poisson distributed. We establish a relation between the limiting return…
Rare event process and entry times distribution for arbitrary null sets on compact manifolds
We establish the general equivalence between rare event process for arbitrary continuous functions whose maximal values are achieved on non-trivial sets, and the entry times distribution for…
Decay of Correlations for Non-Hölderian Dynamics. A Coupling Approach
• Mathematics, Physics
• 1998
We present an upper bound on the mixing rate of the equilibrium state of a dynamical system defined by the one-sided shift and a non-Hölder potential of summable variations. The bound follows from an…
Convergence of Marked Point Processes of Excesses for Dynamical Systems
• Mathematics, Physics
• 2015
We consider stochastic processes arising from dynamical systems simply by evaluating an observable function along the orbits of the system and study marked point processes associated to extremal…
|
2021-10-15 21:37:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8048710823059082, "perplexity": 964.0348942765027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00529.warc.gz"}
|
https://quantumcomputing.stackexchange.com/questions/8652/how-does-a-single-qubit-gate-affect-other-qubits
|
# How does a single-qubit gate affect other qubits?
An instructional quantum computing article I'm reading (How the quantum search algorithm works) states that the following circuit takes $$\vert x\rangle\vert 0\rangle$$ to $$−\vert x\rangle\vert 0\rangle$$, when $$\vert x\rangle$$ is anything other than $$\vert 0000\rangle$$.
My question is: how can an operation (-Z gate) on the target qubit (the bottom one) affect the control qubits (the top 4 ones)?
I would expect the result of this circuit, when $$|x\rangle$$ is anything other than $$\vert 0000\rangle$$, to be $$\vert x\rangle(−\vert 0\rangle)$$. I understand that, algebraically, $$(x)(-y)$$ is the same as $$-xy$$, but intuitively, looking at this circuit, I don't see how those top 4 qubits would be affected by something happening to the bottom qubit. (I know, intuition doesn't have much place in quantum mechanics.) My hunch is that entanglement is at play.
• Hi! Welcome to QCSE. How are you interpreting $\vert x\rangle -\vert 0\rangle$? If $\vert x\rangle$ is four qubits, and (the bottom) qubit $\vert 0\rangle$ is only one qubit, then it's not clear that you can subtract only one qubit from four. Nov 1 '19 at 21:08
• Hi! (Thanks for the edits.) I'm not interpreting it as subtracting |0⟩ from |𝑥⟩. I'm interpreting it as -|0⟩ "next to" |𝑥⟩. And what "next to" means, I have no idea! But it appears, from what I infer from the article, that |𝑥⟩−|0⟩ is the same as −|𝑥⟩|0⟩, so I assumed these objects follow the same rules as terms that are multiplied together (i.e. (𝑥)(−𝑦) is the same as −𝑥𝑦). Nov 1 '19 at 21:27
The phase of -1 generated on the ancilla is just a global phase. You can shift it to any qubit without affecting the statistics of the system. Nothing to do with entanglement.
Quantum mechanics is a mathematical framework that describes reality insofar that it predicts the correct observable statistics of a system. Shifting around a global phase factor from one qubit to another doesn't cause any deviation in the observable statistics and thus, they're all equivalent quantum states of the 5 qubits, mathematically as well as physically.
## $$-(|x\rangle\otimes |0\rangle) \equiv |x\rangle \otimes (-|0\rangle) \equiv (-|x\rangle)\otimes|0\rangle$$
As you're beginning to learn quantum computing you'll soon also hit another conceptual block.
Though your question doesn't ask about it, I'll briefly point out a couple of things. For some reason people seem to believe that the state of the control qubits should not change when a controlled-$$U$$ gate is applied. The actual statement is that only the standard basis states $$|\{0, 1\}^c\rangle$$ ($$c$$ being the number of control qubits) remain unaffected by controlled-$$U$$ gates.
If the control qubits are in a state that is different from one of the standard basis states, then theoretically their state could very well be affected. Moreover, as Niel de Beaudrap beautifully illustrates here, you should not attach too much significance to the standard basis!
The phenomenon is known as phase kickback mechanism:
If we apply a controlled-$$U$$ gate to $$|\psi_k\rangle$$ (an eigenstate of $$U$$; $$e^{i\phi_k}$$ being the eigenvalue), $$|\psi_k\rangle$$ is unchanged, and $$|0\rangle\langle 0| + e^{i\phi_k}|1\rangle\langle 1|$$ is applied to the control qubit.
For a detailed explanation, check DaftWullie's answer and the subsequent discussion in the comments.
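As a small numerical illustration of phase kickback (an editorial sketch, not from the original answers): apply a controlled-Z to a control qubit in $$|+\rangle$$ and a target in $$|1\rangle$$; the eigenvalue $-1$ of Z on $$|1\rangle$$ kicks back onto the control, flipping it to $$|-\rangle$$, even though CZ nominally "acts on" the target.

```python
import numpy as np

# Single-qubit basis states
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
plus = (zero + one) / np.sqrt(2)
minus = (zero - one) / np.sqrt(2)

# Controlled-Z on two qubits: flips the sign of |11> only
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

# Control in |+>, target in |1>
state = np.kron(plus, one)
out = CZ @ state

# The eigenvalue -1 of Z on |1> "kicks back" onto the control: |+> -> |->
expected = np.kron(minus, one)
print(np.allclose(out, expected))  # True
```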
This is not an attempt to answer the question directly, but might help with the intuition. They key point is that multi-qubit gates can propagate the effect of a single-qubit rotation to other qubits. So, there's the technicality of what effect is propagated that I'll leave to other answers, but just to convince you that it can propagate:
Take a very simple circuit of two qubits. We apply the sequence SWAP - single-qubit rotation on qubit 2 - SWAP. I hope you'll agree that the net effect of this is simply that the single-qubit rotation is actually applied to the first qubit, not the second. So, it's all about those multi-qubit gates...
|
2022-01-27 11:54:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6898186206817627, "perplexity": 619.5324417717733}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305260.61/warc/CC-MAIN-20220127103059-20220127133059-00607.warc.gz"}
|
http://tpd.ihep.cas.cn/xshd/xsbg/201701/t20170109_359585.html
|
[1.18]Solving large-N gauge theory by integrability
## Seminar
Title: Solving large-N gauge theory by integrability
Speaker: Dr. Yun-Feng Jiang (江云峰)(ETH Zürich)
Time: 2:30PM, Jan. 18th 2017 (Wednesday)
Place: Theoretical Physics Division, Room 319
Abstract: Gauge theories describe fundamental interactions in nature but are hard to solve, especially in the strongly coupled regime. On the other hand, integrable models, which usually appear in low-dimensional many-body systems, preserve infinitely many conserved charges and can be solved exactly. It has been realized in recent years that certain large-N gauge theories in 4 dimensions can be seen as integrable systems, which raises the hope of solving them exactly. In this talk, I will give an overview of the beautiful relation between large-N gauge theory and integrability. In particular, I will explain in detail how to compute correlation functions of local gauge-invariant operators of the planar $\mathcal{N}=4$ Super-Yang-Mills theory using integrability. I will also comment on the computation of other interesting quantities such as Wilson loops and scattering amplitudes, and generalizations to other theories.
Vidyo
PIN: 319319 (meeting room short number: 00928729; use when connecting via Polycom or telephone)
|
2017-05-27 02:18:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5799162983894348, "perplexity": 1550.5271471413266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608765.79/warc/CC-MAIN-20170527021224-20170527041224-00044.warc.gz"}
|
https://input-output-hk.github.io/ouroboros-network/ouroboros-consensus-test/Test-Util-FS-Sim-Error.html
|
ouroboros-consensus-test-0.1.0.0: Tests of the consensus layer
Test.Util.FS.Sim.Error
Description
HasFS instance wrapping SimFS that generates errors, suitable for testing error handling.
Synopsis
Introduce possibility of errors
TODO: Lenses would be nice for the setters
runSimErrorFS ∷ (MonadSTM m, MonadThrow m) ⇒ MockFS → Errors → (StrictTVar m Errors → HasFS m HandleMock → m a) → m (a, MockFS) Source #
Runs a computation provided an Errors and an initial MockFS, producing a result and the final state of the filesystem.
withErrors ∷ MonadSTM m ⇒ StrictTVar m Errors → Errors → m a → m a Source #
Execute the next action using the given Errors. After the action is finished, the previous Errors are restored.
Streams
An ErrorStream is a possibly infinite Stream of (Maybe) FsErrorTypes.
Nothing indicates that there is no error.
Each time the ErrorStream is used (see runErrorStream), the first element (Nothing in case the list is empty) is taken from the list and an ErrorStream with the remainder of the list is returned. The first element represents whether an error should be returned or not.
An FsError consists of a number of fields: fsErrorType, fsErrorPath, etc. Only the first field is interesting. Therefore, we only generate the FsErrorType. The FsErrorType will be used to construct the actual FsError.
ErrorStream for hGetSome: an error or a partial get.
ErrorStream for hPutSome: an error and possibly some corruption, or a partial write.
newtype Stream a Source #
A Stream is a possibly infinite stream of Maybe as.
Constructors
Stream
Fields: getStream ∷ [Maybe a]
Instances (defined in Test.Util.FS.Sim.Error)
Functor Stream
  fmap ∷ (a → b) → Stream a → Stream b
  (<$) ∷ a → Stream b → Stream a
Show a ⇒ Show (Stream a)
  showList ∷ [Stream a] → ShowS
Semigroup (Stream a)
  (<>) ∷ Stream a → Stream a → Stream a
  stimes ∷ Integral b ⇒ b → Stream a → Stream a
Monoid (Stream a)
  mappend ∷ Stream a → Stream a → Stream a
  mconcat ∷ [Stream a] → Stream a
always ∷ a → Stream a Source #
Make a Stream that always generates the given a.
mkStream ∷ [Maybe a] → Stream a Source #
Create a Stream based on the given possibly infinite list of Maybe as.
mkStreamGen ∷ Int → Gen a → Gen (Stream a) Source #
Make a Stream generator based on a generator for a.
The generator generates a finite stream of 10 elements, where each element has a chance of being either Nothing or an element generated with the given a generator (wrapped in a Just).
The first argument is the likelihood (as used by frequency) of a Just where Nothing has likelihood 2.
Return True if the stream is empty.
A stream consisting of only Nothings (even if it is only one) is not considered to be empty.
runStream ∷ Stream a → (Maybe a, Stream a) Source #
Advance the Stream. Return the Maybe a and the remaining Stream.
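To illustrate the Stream semantics documented above, here is a rough Python analogue (an editorial sketch; the library itself is Haskell, and this mirrors only the behaviour described on this page):

```python
from typing import Generic, List, Optional, Tuple, TypeVar

A = TypeVar("A")

class Stream(Generic[A]):
    """Rough analogue of the documented Stream: a list of optional values."""
    def __init__(self, items: List[Optional[A]]):
        self.items = items

    def run(self) -> Tuple[Optional[A], "Stream[A]"]:
        # Like runStream: take the head (None if the list is empty)
        # and return the remainder as a new Stream.
        if not self.items:
            return None, Stream([])
        return self.items[0], Stream(self.items[1:])

def always(x: A) -> Stream[A]:
    # Finite approximation of `always`, which conceptually repeats forever.
    return Stream([x] * 1000)

# Example: an error stream that injects an error on the second use only.
errs = Stream([None, "FsResourceDoesNotExist", None])
e1, errs = errs.run()   # None  -> no error on first call
e2, errs = errs.run()   # error -> the wrapped method should throw
```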
newtype Partial Source #
Given a Partial p where p > 0, we do the following to make a call to hGetSome or hPutSome partial:
• hGetSome: we subtract p from the number of requested bytes. If that would result in 0 requested bytes or less, we request 1 byte. If the number of requested bytes was already 0, leave it untouched, as we can't simulate a partial read in this case.
• hPutSome: we drop the last p bytes from the bytestring. If that would result in an empty bytestring, just take the first byte of the bytestring. If the bytestring was already empty, leave it untouched, as we can't simulate a partial write in this case.
Constructors
Partial Word64
Instances (defined in Test.Util.FS.Sim.Error)
(class name lost in extraction) Partial
Arbitrary Partial
  arbitrary ∷ Gen Partial
  shrink ∷ Partial → [Partial]
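The partial-read/partial-write rules above are mechanical enough to sketch directly; the following Python is an illustrative translation of the documented behaviour, not the library code:

```python
def partial_get(requested: int, p: int) -> int:
    # hGetSome: subtract p from the requested byte count; clamp to at
    # least 1, unless the request was already 0 (no partial read possible).
    if requested == 0:
        return 0
    return max(requested - p, 1)

def partial_put(data: bytes, p: int) -> bytes:
    # hPutSome: drop the last p bytes; if that would leave nothing, keep
    # the first byte, unless the input was already empty.
    if not data:
        return data
    return data[:-p] if len(data) > p else data[:1]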
Generating corruption for hPutSome
Model possible corruptions that could happen to a hPutSome call.
Constructors
SubstituteWithJunk Blob
  The blob to write is substituted with corrupt junk
PartialWrite Partial
  Only perform the write partially
Instances (defined in Test.Util.FS.Sim.Error)
(class name lost in extraction) PutCorruption
Arbitrary PutCorruption
Apply the PutCorruption to the ByteString.
Error streams for HasFS
data Errors Source #
Error streams for the methods of the HasFS type class.
An ErrorStream is provided for each method of the HasFS type class. This ErrorStream will be used to generate potential errors that will be thrown by the corresponding method.
For hPutSome, an ErrorStreamWithCorruption is provided to simulate corruption.
An Errors is used in conjunction with SimErrorFS, which is a layer on top of SimFS that simulates methods throwing FsErrors.
Instances (defined in Test.Util.FS.Sim.Error)
(class name lost in extraction) Errors
Semigroup Errors
  stimes ∷ Integral b ⇒ b → Errors → Errors
(class name lost in extraction) Errors
Arbitrary Errors
  arbitrary ∷ Gen Errors
  shrink ∷ Errors → [Errors]
Return True if all streams are empty (null).
Arguments
∷ Bool: True -> generate partial writes
→ Bool: True -> generate SubstituteWithJunk corruptions
→ Gen Errors
Generator for Errors that allows some things to be disabled.
This is needed by the VolatileDB state machine tests, which try to predict what should happen based on the Errors, which is too complex sometimes.
Use the given ErrorStream for each field/method. No corruption of hPutSome.
|
2022-06-27 00:29:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19655801355838776, "perplexity": 8289.419262925532}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103322581.16/warc/CC-MAIN-20220626222503-20220627012503-00239.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/solar-panel-surface-8-m-5-m-deliver-25-15-v-full-sun-intensity-800-w-m--compute-efficiency-q2028171
|
A solar panel with surface 0.8 m × 0.5 m can deliver 2.5 A at 15 V in full sun (intensity 800 W/m²).
a) Compute the efficiency at which this solar panel converts solar energy into electrical.
b) Determine the resistance of R1 ... see diagram.
{Note that it is an active device ("sensor"); in typical use, the current is "almost linear" with light intensity, while voltage remains "almost constant".}
I know the efficiency is 11.7 % and I got R=6 ohms for my answer to part B but it says R is wrong. The other hint it gave me is that there is other resistance in series with R1.
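For part (a), the quoted 11.7% can be verified directly (a worked check, not part of the original post):
$$\eta = \frac{P_{\text{out}}}{P_{\text{in}}} = \frac{(2.5\ \text{A})(15\ \text{V})}{(0.8\ \text{m})(0.5\ \text{m})(800\ \text{W/m}^2)} = \frac{37.5\ \text{W}}{320\ \text{W}} \approx 11.7\%$$
Part (b) depends on the diagram, which did not survive extraction; note the hint that additional resistance sits in series with R1, so R1 alone must be less than the 6 Ω total implied by $V/I = 15/2.5$.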
|
2014-08-21 19:31:38
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8410306572914124, "perplexity": 1816.0090428997144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500820886.32/warc/CC-MAIN-20140820021340-00166-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/statistics-probability/elementary-statistics-12th-edition/chapter-9-inferences-from-two-samples-9-3-two-means-independent-samples-basic-skills-and-concepts-page-466/24
|
# Chapter 9 - Inferences from Two Samples - 9-3 Two Means: Independent Samples - Basic Skills and Concepts - Page 466: 24
Reject the null hypothesis.
#### Work Step by Step
Null hypothesis: $\mu_1=\mu_2$; alternative hypothesis: $\mu_1\ne\mu_2$. Hence the value of the test statistic: $t=\frac{(\overline{x_1}-\overline{x_2})-(\mu_1-\mu_2)}{\sqrt{s_1^2/n_1+s_2^2/n_2}}=\frac{(0.8168-0.7848)-(0)}{\sqrt{0.0075^2/36+0.0044^2/36}}=22.081.$ The degrees of freedom: $\min(n_1-1,n_2-1)=\min(36-1,36-1)=35.$ A test statistic of 22.081 with 35 degrees of freedom lies far beyond the largest tabled value, so the P-value is far less than 0.01. If the P-value is less than $\alpha$, the significance level, we reject the null hypothesis. Here P is much less than $\alpha=0.05$, hence we reject the null hypothesis: there does appear to be a difference in the mean weights.
|
2018-09-23 22:33:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823619484901428, "perplexity": 561.1171113680706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159820.67/warc/CC-MAIN-20180923212605-20180923233005-00291.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/appendix-d-trigonometry-d-exercises-page-a32/14
|
# Appendix D - Trigonometry - D Exercises - Page A32: 14
$4\pi$ centimeters
#### Work Step by Step
We use the formula $s=θr$, where $θ$ must be in radians. It is given in degrees, so we convert from degrees to radians: $72°\times\frac{\pi}{180}=\frac{2\pi}{5}$. Plug in the given values and simplify: $s=\frac{2\pi}{5}(10)=4\pi$ centimeters.
|
2019-11-17 09:40:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7904466986656189, "perplexity": 1699.9777962571673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668910.63/warc/CC-MAIN-20191117091944-20191117115944-00102.warc.gz"}
|
http://finmining.com/85gv5/semantic-segmentation-of-images-90f3b4
|
What's the first thing you do when you're attempting to cross the road? We typically look left and right, take stock of the vehicles coming towards us, and make our move. Our brain labels every object in the scene almost instantly; for machines, the answer was an emphatic "no" till a few years back, but the rise and advancements in computer vision have changed that.

Semantic segmentation is a computer vision task in which we label specific regions of an image according to what's being shown: every pixel is assigned to a class, which is why the task is commonly referred to as dense prediction. The classes are "semantically interpretable" and correspond to real-world categories, and objects shown in an image are grouped based on defined categories; a street scene, for instance, would be segmented into pedestrians, vehicles, roads, buildings, and so on. Two types of image segmentation exist: semantic segmentation groups all pixels of the same class as one object, whereas instance segmentation further splits categories like "vehicles" into "cars," "motorcycles," "buses," and so on. Example benchmarks for this task are Cityscapes, PASCAL VOC and ADE20K, and high-level wrappers reduce inference to a single call, e.g. segment_image.segmentAsAde20k("sample.jpg", output_image_name = "image_new.jpg", overlay = True), where the overlay flag applies a segmentation overlay on the image and the saved output shows the objects segmented.

One application of deep-learning-based semantic segmentation is tracking deforestation: environmental agencies use the percentage of vegetation cover computed from segmented imagery to assess and quantify the environmental and ecological health of a region. The running example here is a MATLAB workflow that trains a U-Net to segment a high-resolution multispectral image with seven channels: three color channels, three near-infrared channels, and a mask of valid pixels [1]. The image set was captured using a drone over Hamlin Beach State Park, NY, and the data contains labeled training, validation, and test sets with 18 object class labels; each pixel of a labeled image is assigned to one of the 18 classes. The near-infrared channels carry useful detail; for example, the trees near the center of the second channel image show more detail than the trees in the other two channels.

The workflow: download the MAT-file version of the data set with the downloadHamlinBeachMSIData helper function, reshape the data so that the channels are in the third dimension with the switchChannelsToThirdPlane helper, save the training data as a MAT file and the training labels as a PNG file, and store them in an imageDatastore and a pixelLabelDatastore. Because the MAT file format is a nonstandard image format, a MAT file reader (matReader) extracts the first six channels from the training data and omits the last channel containing the mask. A randomPatchExtractionDatastore then extracts multiple corresponding random patches from the image datastore and pixel label datastore, a memory-friendly way to provide enough data to the network. Training uses stochastic gradient descent with momentum (SGDM), with hyperparameters set via the trainingOptions (Deep Learning Toolbox) function; it takes about 20 hours on an NVIDIA Titan X and can take even longer depending on your GPU hardware, so a pretrained version of U-Net is available, and keeping the doTraining parameter in the example code set to false lets you run the entire example without having to wait for training. To perform the forward pass on the trained network, use the helper function segmentImage with the validation data set; segmentation is performed on image patches using the semanticseg function. Post image processing removes salt-and-pepper noise and stray pixels, the global accuracy is measured with the evaluateSemanticSegmentation function, and the percentage of vegetation cover is calculated by dividing the number of vegetation pixels by the number of valid pixels.

Architecturally, a fully convolutional network maps pixels to pixels, and significant improvements were made by Long et al., who proposed adapting existing, well-studied image classification networks (e.g. AlexNet) to serve as the encoder module of the network, appending a decoder module with transpose convolutional layers to upsample the coarse feature maps into a full-resolution segmentation map. Because we're predicting for every pixel in the image, we cannot simply discard spatial resolution; instead, we alleviate the computational burden by periodically downsampling our feature maps through pooling or strided convolutions and upsampling again in the decoder. Whereas pooling operations downsample the resolution by summarizing a local area with a single value (average or max pooling), "unpooling" operations upsample the resolution by distributing a single value into a higher resolution. A transpose convolution essentially does the opposite of a standard convolution's dot product; note that some filter sizes produce an overlap in the output, leading to a checkerboard artifact, so it's best to ensure that your filter size does not produce an overlap. Combining fine layers and coarse layers lets the model make local predictions that respect global structure.

The U-Net [2] makes this design symmetric: a contracting path of convolutional layers interspersed with max pooling captures context while successively decreasing the resolution of the input image, followed by a series of convolutional layers interspersed with upsampling operators, successively increasing the resolution; the result is a symmetric shape like the letter U, with an expanding path that enables precise localization. Skip connections from earlier layers in the network (prior to a downsampling operation) provide the necessary detail to reconstruct accurate shapes for segmentation boundaries. The original architecture introduces a decrease in resolution due to the use of valid padding, with padding values obtained by image reflection at the border; the MATLAB example instead modifies the U-Net to use zero-padding in the convolutions, so that the input and the output to the convolutions have the same size. Short skip connections within residual blocks, alongside the long skip connections between corresponding feature maps of the encoder and decoder modules, allow for faster convergence when training and allow for deeper models to be trained. Dilated convolutions provide an alternative approach towards gaining a wide field of view while preserving the full spatial dimension: the values used for a dilated convolution are spaced apart according to some specified dilation rate, and some architectures swap out the last few pooling layers for dilated convolutions with successively higher dilation rates to maintain the same field of view while preventing loss of spatial detail. Expanding on this, Jegou et al. proposed the use of dense blocks, still following a U-Net structure, arguing that the "characteristics of DenseNets make them a very good fit for semantic segmentation as they naturally induce skip connections and multi-scale supervision"; their FC-DenseNet103 model achieves state of the art results (Oct 2017) on the CamVid dataset.

[Fig 2: Credits to Jeremy Jordan's blog.]

The full network is trained according to a pixel-wise cross entropy loss, comparing the class predictions (a depth-wise pixel vector) to the one-hot encoded target vector at each position. The U-Net paper discusses a loss weighting scheme for each pixel such that there is a higher weight at the border of segmented objects, in order to counteract a class imbalance; this helped their model segment cells in biomedical images in a discontinuous fashion such that individual cells may be easily identified within the binary segmentation map. An alternative is the Dice loss, based on the Dice coefficient $\frac{2\left| {A \cap B} \right|}{\left| A \right| + \left| B \right|}$, where ${\left| {A \cap B} \right|}$ represents the common elements between sets A and B, and $\left| A \right|$ represents the number of elements in set A (and likewise for set B). For the case of evaluating a Dice coefficient on predicted segmentation masks, we can approximate ${\left| {A \cap B} \right|}$ as the element-wise multiplication between the prediction and target mask, and then sum the resulting matrix; because our target mask is binary, we effectively zero-out any pixels from our prediction which are not "activated" in the target mask. In order to formulate a loss function which can be minimized, we simply use $1 - Dice$. To quantify $\left| A \right|$ and $\left| B \right|$, some researchers use the simple sum whereas other researchers prefer to use the squared sum.

Evaluation commonly reports the pixel accuracy, for each class separately as well as globally across all classes: when considering the per-class pixel accuracy we're essentially evaluating a binary mask, where a true positive represents a pixel that is correctly predicted to belong to the given class (according to the target mask). One challenge is differentiating classes with similar visual characteristics, such as trying to classify a green pixel as grass, shrubbery, or tree. Common failure modes also relate to inference scale: in one illustration, thin posts are inconsistently segmented in the scaled-down (0.5x) image but better predicted in the scaled-up (2.0x) image, while a large road / divider region is better segmented at lower resolution (0.5x). The usefulness (and type) of data augmentation appears to depend on the problem domain, and training convolutional neural networks for very high resolution images requires a large quantity of high-quality pixel-level annotations, which is extremely labor- and time-consuming to produce; it was the availability of large, annotated data sets (e.g. [12], [15]) that let deep learning approaches quickly become the state of the art in semantic segmentation.

The same ideas extend in several directions mentioned here: semantic segmentation of remote sensing images with sparse annotations, which is beneficial to detect objects and understand the scene in earth observation; region labeling methods that augment a CRF formulation with hard mutual exclusion (mutex) constraints, where depth data can be used to identify objects existing in multiple image regions and characteristics are used selectively through switching network branches; fully 3D models such as the proposed 3D-DenseUNet-569, a significantly deeper network with lower trainable parameters for liver and tumor segmentation; real-time segmented road scenes for autonomous driving; and segmentation maps that support medical decision systems.
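As a companion to the soft Dice discussion above, here is a minimal NumPy sketch (an editorial addition) using the element-wise-product approximation of $\left| {A \cap B} \right|$ described in the text:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss for a predicted probability mask vs. a binary target.

    |A ∩ B| is approximated by the element-wise product of prediction and
    target; |A| and |B| by simple sums (some authors use squared sums).
    """
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice  # minimize 1 - Dice

# Toy 4x4 example: a perfect prediction gives loss ~0, a fully wrong one ~1.
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=float)
print(soft_dice_loss(target, target))        # ~0.0
print(soft_dice_loss(1.0 - target, target))  # ~1.0
```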
|
2021-07-25 00:29:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4886089265346527, "perplexity": 2184.860432125976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151531.67/warc/CC-MAIN-20210724223025-20210725013025-00385.warc.gz"}
|
http://clay6.com/qa/15255/a-horizontal-pipe-of-non-uniform-cross-section-allows-water-to-flow-through
|
A horizontal pipe of non-uniform cross-section allows water to flow through it with a velocity $1\; ms^{-1}$ when the pressure is $50\;kPa$ at a point. If the velocity of flow has to be $2\; ms^{-1}$ at some other point, the pressure at that point should be
$\begin {array} {1 1} (a)\;50\;kPa & \quad (b)\;100\;kPa \\ (c)\;48.5\;kPa & \quad (d)\;24.25\;kPa \end {array}$
1 Answer
$(c)\;48.5\;kPa$
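For completeness, this follows from Bernoulli's equation for horizontal flow (taking water's density as $\rho = 1000\;kg\,m^{-3}$):
$$P_2 = P_1 + \frac{1}{2}\rho\,(v_1^2 - v_2^2) = 50\;kPa + \frac{1}{2}(1000)(1^2 - 2^2)\;Pa = 50\;kPa - 1.5\;kPa = 48.5\;kPa$$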
answered Nov 7, 2013
|
2017-02-21 18:47:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7255633473396301, "perplexity": 5611.004959148053}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170823.55/warc/CC-MAIN-20170219104610-00109-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/another-subspace-prove.297121/
|
# Another subspace prove
1. Mar 4, 2009
### transgalactic
there are two subspaces W1 and W2 of the space F^3
dim(W1)=1
dim(W2)=2
prove or disprove that:
$$W_1\cap W_2=\{0\}$$
??
there could be a case where W2 includes W1; then their intersection is not the zero space
correct??
2. Mar 4, 2009
### Quantumpencil
If there are no other constraints on the problem then yeah.
Take a plane going through the origin and a line contained on that plane in R^3.
That seems like a silly problem though. Do you know anything else like
W1+W2 = W3 (direct sum?)
3. Mar 4, 2009
### transgalactic
can you give an actual example
??
4. Mar 4, 2009
### Quantumpencil
I am just saying your counterexample (or one like it) works as long as there are no other restrictions on the problem.
If we require that W1+W2 = F^3, then it is true, because one space cannot contain the other.
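To make that counterexample concrete (one possible choice of subspaces, written in coordinates):
$$W_1=\operatorname{span}\{(1,0,0)\}\subset W_2=\operatorname{span}\{(1,0,0),(0,1,0)\},\qquad W_1\cap W_2=W_1\neq\{0\}.$$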
5. Mar 4, 2009
### transgalactic
if W2 includes W1
and we have another subspace W3
with
dim W3=1
then dim W2 + dim W3 = 3
what is the problem with that??
6. Mar 4, 2009
### Quantumpencil
Nothing, that's tenable.
|
2017-08-21 02:05:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4506932497024536, "perplexity": 4886.527198842938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107065.72/warc/CC-MAIN-20170821003037-20170821023037-00016.warc.gz"}
|
http://openstudy.com/updates/50ef0cede4b07cd2b649d25b
|
## anonymous 3 years ago Find an exact value. cos 15° Options: A) (sqrt 6 + sqrt 2)/4 B) (-sqrt 2 + sqrt 6)/4 C) (-sqrt 2 - sqrt 6)/4 D) (-sqrt 6 + 1)/4
1. anonymous
A) $\frac{ \sqrt{6}+\sqrt{2} }{ 4 }$ B) $\frac{ -\sqrt{2} +\sqrt{6} }{ 4 }$ C) $\frac{ -\sqrt{2} - \sqrt{6} }{ 4 }$ D) $\frac{ -\sqrt{6} +1}{ 4 }$
2. amoodarya
Do you know that cos(a-b) = cos a cos b + sin a sin b? If so, then cos 15° = cos(45° - 30°) = cos 45° cos 30° + sin 45° sin 30°.
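Carrying that computation through with the standard exact values (this is just the arithmetic behind option A):
$$\cos 15^\circ=\frac{\sqrt2}{2}\cdot\frac{\sqrt3}{2}+\frac{\sqrt2}{2}\cdot\frac12=\frac{\sqrt6+\sqrt2}{4}$$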
3. anonymous
where did you get all the numbers?
4. amoodarya
I don't get what you mean!
5. anonymous
But its for cos 15° ?
6. amoodarya
15° is an unknown angle; you have to build it from known angles
7. amoodarya
Do you know that formula? If so, put the numbers into it and you will have answer (A)
8. anonymous
I think I got it. Can you help with another?
9. klimenkov
Also, you can use $$\cos^2\frac x 2=\frac{1+\cos x}2$$.
10. anonymous
@klimenkov Can you help with this one, pelase? Write the expression as either the sine, cosine, or tangent of a single angle. $\cos(\frac{ \pi }{ 5 }) \cos(\frac{ \pi }{ 7 }) + \sin (\frac{ \pi }{ 5}) \sin (\frac{ \pi }{ 7 })$
11. amoodarya
as "klimenkov " says you can use that formula shuch the way i draw|dw:1357846727839:dw|
12. amoodarya
cos(π/5)cos(π/7)+sin(π/5)sin(π/7) = cos(π/5 - π/7), by the formula I mentioned before: cos(a-b) = cos a cos b + sin a sin b
13. anonymous
so would the cos (a+b) be pi/5+pi/7?
14. klimenkov
No. It will be $$\cos(\frac\pi5-\frac\pi7)$$ as @amoodarya said. Hope you can do this subtraction.
15. amoodarya
No, I think you've got it now: cos(a+b) = cos a cos b - sin a sin b and cos(a-b) = cos a cos b + sin a sin b. But 2π/35 is an unknown angle
16. anonymous
$\cos(\frac{ \pi }{ 5 }+\frac{ \pi }{ 7 })$ is 2pi/35? Can you show me how to fill in the formula? I think it'd make it easier for me.
17. amoodarya
cos(a-b) = cos a cos b + sin a sin b, with a = π/5 and b = π/7
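The subtraction klimenkov mentions is just a matter of common denominators:
$$\frac{\pi}{5}-\frac{\pi}{7}=\frac{7\pi}{35}-\frac{5\pi}{35}=\frac{2\pi}{35},\qquad\text{so the expression equals }\cos\Big(\frac{2\pi}{35}\Big)$$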
18. anonymous
cos (pi/5-pi/7)= cos (pi/5) cos (pi/7)+sin (pi/5) sin (pi/7) ?
19. klimenkov
Yes.
20. anonymous
So the answer's 2pi/35?
21. klimenkov
No. The answer is $$\cos(\frac{2\pi}{35})$$.
22. anonymous
over 25?
23. klimenkov
No. Zoom in.
24. anonymous
That's what I said?
25. klimenkov
You said this without cosine.
|
2016-05-27 10:33:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.903077244758606, "perplexity": 8202.615576214974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276567.28/warc/CC-MAIN-20160524002116-00009-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://publications.mfo.de/handle/mfo/3703
|
• 1939 - Toric Geometry
[OWR-2019-43] (2019) - (22 Sep - 28 Sep 2019)
Toric geometry is a subfield of algebraic geometry with rich interactions with geometric combinatorics, and many other fields of mathematics. This workshop brought together a broad range of mathematicians interested in ...
• 1938 - Large Scale Stochastic Dynamics
[OWR-2019-42] (2019) - (15 Sep - 21 Sep 2019)
The goal of this workshop was to explore the recent advances in the mathematical understanding of the macroscopic properties which emerge on large space-time scales from interacting microscopic particle systems. There ...
• 1937 - Many-Body Quantum Systems
[OWR-2019-41] (2019) - (08 Sep - 14 Sep 2019)
The interaction among fundamental particles in nature leads to many interesting effects in quantum statistical mechanics; examples include superconductivity for charged systems and superfluidity in cold gases. It is a huge ...
• 1936 - Innovative Approaches to the Numerical Approximation of PDEs
[OWR-2019-40] (2019) - (01 Sep - 07 Sep 2019)
This workshop was about the numerical solution of PDEs for which classical approaches, such as the finite element method, are not well suited or need further (theoretical) underpinnings. A prominent example of PDEs for ...
• 1935 - Geometric, Algebraic, and Topological Combinatorics
[OWR-2019-39] (2019) - (25 Aug - 31 Aug 2019)
The 2019 Oberwolfach meeting "Geometric, Algebraic and Topological Combinatorics" was organized by Gil Kalai (Jerusalem), Isabella Novik (Seattle), Francisco Santos (Santander), and Volkmar Welker (Marburg). It covered a ...
• 1934 - Mathematical Aspects of Hydrodynamics
[OWR-2019-38] (2019) - (18 Aug - 24 Aug 2019)
The workshop dealt with the partial differential equations that describe fluid motion and related topics. These topics included both inviscid and viscous fluids in two and three dimensions. Some talks addressed aspects ...
• 1933 - C*-Algebras
[OWR-2019-37] (2019) - (11 Aug - 17 Aug 2019)
The subject of Operator Algebras is a flourishing broad area of mathematics which has strong ties to many other areas in mathematics including Functional/Harmonic Analysis, Topology, (non-commutative) Geometry, Geometric ...
• 1932 - Homotopy Theory
[OWR-2019-36] (2019) - (04 Aug - 10 Aug 2019)
The workshop "Homotopy Theory" was organized by Jesper Grodal (Copenhagen), Michael Hill (Los Angeles), and Birgit Richter (Hamburg). It covered a wide variety of topics in homotopy theory, from foundational questions to ...
• 1931 - Computational Multiscale Methods
[OWR-2019-35] (2019) - (28 Jul - 03 Aug 2019)
Many physical processes in material sciences or geophysics are characterized by inherently complex interactions across a large range of non-separable scales in space and time. The resolution of all features on all scales ...
• 1930 - Partial Differential Equations
[OWR-2019-34] (2019) - (21 Jul - 27 Jul 2019)
The workshop dealt with nonlinear partial differential equations and some applications in geometry, touching several different topics such as geometric flows, minimal surfaces, semi-linear equations and calculus of variations.
• 1929b - Mathematical Foundations of Isogeometric Analysis
[OWR-2019-33] (2019) - (14 Jul - 20 Jul 2019)
Isogeometric analysis is a recent technology for numerical simulation, unifying computer aided design and finite element analysis. It offers a true design-through-analysis pipeline by employing the same representation ...
• 1929a - Mathematical Theory of Water Waves
[OWR-2019-32] (2019) - (14 Jul - 20 Jul 2019)
Water waves, that is waves on the surface of a fluid (or the interface between different fluids) are omnipresent phenomena. However, as Feynman wrote in his lecture, water waves that are easily seen by everyone, and which ...
• 1928 - Dynamische Systeme
[OWR-2019-31] (2019)
This workshop continued the biannual series at Oberwolfach on Dynamical Systems that started as the "Moser-Zehnder meeting" in 1981. The main themes of the workshop are the new results and developments in the area of ...
• 1927 - Differentialgeometrie im Grossen
[OWR-2019-30] (2019) - (30 Jun - 05 Jul 2019)
The topics discussed at the meeting reflected current trends in global differential geometry. These topics included complex geometry, Einstein metrics, geometric flows, metric geometry and manifolds satisfying curvature bounds.
• 1926 - Algebraic K-theory
[OWR-2019-29] (2019) - (23 Jun - 29 Jun 2019)
Algebraic $K$-theory has seen a fruitful development during the last three years. Part of this recent progress was driven by the use of $\infty$-categories and related techniques originally developed in algebraic ...
• 1925b - Statistical Methodology and Theory for Functional and Topological Data
[OWR-2019-28] (2019) - (16 Jun - 22 Jun 2019)
The workshop focuses on the statistical analysis of complex data which cannot be represented as realizations of finite-dimensional random vectors. An example of such data are functional data. They arise in a variety of ...
• 1925a - Logarithmic Enumerative Geometry and Mirror Symmetry
[OWR-2019-27] (2019) - (16 Jun - 22 Jun 2019)
The new field of log enumerative geometry has formed at the crossroads of mirror symmetry, Gromov-Witten theory and log geometry. This workshop has been the first to promote this field and bring together the junior and ...
• 1923 - Mixed-integer Nonlinear Optimization: a hatchery for modern mathematics
[OWR-2019-26] (2019) - (02 Jun - 08 Jun 2019)
The second MFO Oberwolfach Workshop on Mixed-Integer Nonlinear Programming (MINLP) took place between 2nd and 8th June 2019. MINLP refers to one of the hardest Mathematical Programming (MP) problem classes, involving both ...
• 1922 - Foundations and New Horizons for Causal Inference
[OWR-2019-25] (2019) - (26 May - 01 Jun 2019)
While causal inference is established in some disciplines such as econometrics and biostatistics, it is only starting to emerge as a valuable tool in areas such as machine learning and artificial intelligence. The ...
• 1921 - Nonlinear Hyperbolic Problems: modeling, analysis, and numerics
[OWR-2019-24] (2019) - (19 May - 25 May 2019)
The workshop gathered together leading international experts, as well as most promising young researchers, working on the modelling, the mathematical analysis, and the numerical methods for nonlinear hyperbolic ...
|
2020-11-25 08:38:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3115708529949188, "perplexity": 2294.768137216883}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181482.18/warc/CC-MAIN-20201125071137-20201125101137-00104.warc.gz"}
|
https://cs50.harvard.edu/college/2020/fall/psets/7/fiftyville/
|
# Fiftyville
Write SQL queries to solve a mystery.
## A Mystery in Fiftyville
The CS50 Duck has been stolen! The town of Fiftyville has called upon you to solve the mystery of the stolen duck. Authorities believe that the thief stole the duck and then, shortly afterwards, took a flight out of town with the help of an accomplice. Your goal is to identify:
• Who the thief is,
• What city the thief escaped to, and
• Who the thief’s accomplice is who helped them escape
All you know is that the theft took place on July 28, 2020 and that it took place on Chamberlin Street.
How will you go about solving this mystery? The Fiftyville authorities have taken some of the town’s records from around the time of the theft and prepared a SQLite database for you, fiftyville.db, which contains tables of data from around the town. You can query that table using SQL SELECT queries to access the data of interest to you. Using just the information in the database, your task is to solve the mystery.
## Getting Started
• Execute cd to ensure that you’re in ~/ (i.e., your home directory, aka ~).
• If you haven’t already, execute mkdir pset7 to make (i.e., create) a directory called pset7 in your home directory.
• Execute cd pset7 to change into (i.e., open) that directory.
• Execute wget https://cdn.cs50.net/2020/fall/psets/7/fiftyville/fiftyville.zip to download a (compressed) ZIP file with this problem’s distribution.
• Execute unzip fiftyville.zip to uncompress that file.
• Execute rm fiftyville.zip followed by yes or y to delete that ZIP file.
• Execute ls. You should see a directory called fiftyville, which was inside of that ZIP file.
• Execute cd fiftyville to change into that directory.
• Execute ls. You should see a fiftyville.db file, a log.sql file, and an answers.txt file.
## Specification
For this problem, equally as important as solving the mystery itself is the process that you use to solve the mystery. In log.sql, keep a log of all SQL queries that you run on the database. Above each query, label each with a comment (in SQL, comments are any lines that begin with --) describing why you’re running the query and/or what information you’re hoping to get out of that particular query. You can use comments in the log file to add additional notes about your thought process as you solve the mystery: ultimately, this file should serve as evidence of the process you used to identify the thief!
Once you solve the mystery, complete each of the lines in answers.txt by filling in the name of the thief, the city that the thief escaped to, and the name of the thief’s accomplice who helped them escape town. (Be sure not to change any of the existing text in the file or to add any other lines to the file!)
Ultimately, you should submit both your log.sql and answers.txt files.
## Hints
• Execute sqlite3 fiftyville.db to begin running queries on the database.
• While running sqlite3, executing .tables will list all of the tables in the database.
• While running sqlite3, executing .schema TABLE_NAME, where TABLE_NAME is the name of a table in the database, will show you the CREATE TABLE command used to create the table. This can be helpful for knowing which columns to query!
• You may find it helpful to start with the crime_scene_reports table. Start by looking for a crime scene report that matches the date and the location of the crime (a sketch of such a query appears after these hints).
• See this SQL keywords reference for some SQL syntax that may be helpful!
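For instance, a first entry in log.sql might look like the sketch below. The column names (year, month, day, street) are assumptions about the schema; verify them with .schema crime_scene_reports before relying on them.

```sql
-- Find the crime scene report for the theft on July 28, 2020 on Chamberlin Street.
-- Column names are assumed; check them first with ".schema crime_scene_reports".
SELECT description
FROM crime_scene_reports
WHERE year = 2020 AND month = 7 AND day = 28
AND street = 'Chamberlin Street';
```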
## Testing
Execute the below to evaluate the correctness of your code using check50.
check50 cs50/problems/2020/fall/fiftyville
## How to Submit
You must submit both your log.sql file and your answers.txt file for this problem.
1. Download your log.sql and your answers.txt files by control-clicking or right-clicking on the files in CS50 IDE’s file browser and choosing Download.
2. Go to CS50’s Gradescope page.
3. Click “Problem Set 7: Fiftyville”.
4. Drag and drop your log.sql and your answers.txt files to the area that says “Drag & Drop”. Be sure that each file is correctly named!
|
2021-10-26 08:52:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1978553682565689, "perplexity": 3783.0628366648475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587854.13/warc/CC-MAIN-20211026072759-20211026102759-00066.warc.gz"}
|
https://www.matematica.pt/en/faq/multiply-negative-numbers.php
|
# If I multiply two negative numbers the result will be a positive one, why does it happen?
#### short answers for big questions
In order to understand this multiplication we first have to understand what it means to multiply a positive number by a negative one. Let's suppose that we have to pay a loan of 300 Euros every month. It means that every month the bank will change the balance of our bank account by -300. After 6 months, how much money has been withdrawn from the account to pay the loan? It is easy to answer; we just have to compute: 6 × (-300) = -1800. Let's now suppose that one of our well-off uncles has decided to pay the loan for us during a year. How much will we gain if that happens? In this case, twelve monthly charges of -300 Euros are removed from our account, which we can write as: -12 × (-300) = 3600. So, at the end of the year we will have saved 3600 Euros.
## I did not understand: could you give another example?
There are several examples, and it would not be difficult to give you a mathematical proof of the concept that underlies the multiplication of two negative numbers. However, to avoid the use of formulas, I will show you another practical example. In our vocabulary, the words "No" and "Never" often have a negative connotation. For instance: "Today you are not going to visit your cousin" is the opposite of saying "Today you are going to visit your cousin". Let us now suppose a sentence that includes two negative words: "I do not want you to never visit your cousin". This double negative can be seen as a request, meaning that you sometimes have to visit the mentioned relative; that is, the two negatives neutralize each other. In Mathematics something similar happens: when we multiply two negative numbers, the result is a positive number!
|
2022-11-27 01:24:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3876411020755768, "perplexity": 423.31346258934707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710155.67/warc/CC-MAIN-20221127005113-20221127035113-00342.warc.gz"}
|
https://socratic.org/questions/how-do-you-graph-y-1-4x-4-by-plotting-points
|
# How do you graph y=1/4x-4 by plotting points?
Jul 7, 2015
You need only two points to plot this function, which represents a straight line.
#### Explanation:
This is a linear function that will give you, when plotted, a straight line.
First you can see that the slope of your line is positive $m = \frac{1}{4}$ so your line is "going up" (from left to right).
Also, the line crosses the y-axis at $- 4$.
So basically when $x = 0$ then $y = - 4$. This is one of the points you need.
To plot a line you need only 2 points, so the next one can be chosen at $x = 4$, which gives you:
$x = 4$ then $y = \frac{1}{4} \cdot 4 - 4 = 1 - 4 = - 3$
So, you can now plot your line through points of coordinates:
${P}_{1}$: $x = 0 , y = - 4$
${P}_{2}$: $x = 4 , y = - 3$
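As a quick consistency check (not part of the original answer, just a verification), the slope computed from these two points recovers $m$:
$$m=\frac{y_2-y_1}{x_2-x_1}=\frac{-3-(-4)}{4-0}=\frac{1}{4}$$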
|
2019-01-23 03:24:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8883530497550964, "perplexity": 498.62047749961056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583884996.76/warc/CC-MAIN-20190123023710-20190123045710-00222.warc.gz"}
|
https://www.patterncomputer.com/blogs/brute-force/
|
# Brute Force
by Michael Dushkoff |
In this post, we present a more detailed analysis of the problem of exploring high dimensional spaces to discover interesting patterns within data. This subject was touched upon in our Splash event video, but we’d like to further expound upon it here to clarify more explicitly the size of the problem and why it is quite simply intractable via brute force methods.
The brute force approach to finding patterns within a dataset would be to explore all patterns within a space of n variables and order r — the number of variables that are potentially related to a particular outcome.
Generally, the total number of patterns in n variables of order r is:
$$C(n,r)=\binom{n}{r} = \frac{n!}{r!(n-r)!}$$ This assumes patterns can be found without regard to the orientation, or permutation of variables (A before B before C etc.) If the orientation of variables matters, the problem size is then:
$$P(n,r) = \frac{n!}{(n-r)!}$$ Which is considerably larger.
Let’s consider a very small problem where we want to analyze human gene data. In this dataset, n=20,000 genes and we’d like to find patterns of r = {3,4,5,6} order relationships wherein the orientation of variables in an interaction does not matter.
We assume at the extreme, lowest end, it would take at minimum 1 floating point operation (FLOP). Let’s say that there are 1,000 patients. In this case, it would take 1,000 FLOPs to evaluate each pattern. This is of course a very low estimate and in reality it is most likely 1-3 orders of magnitude larger, but let’s continue with this assumption.
Next, we would like to determine how long it would take to explore this space. To do this we must consider the total time it takes to explore the space and how fast you can explore it. If we were to utilize a computer that could operate at a specific number of operations per second, we can calculate how much time it would take to explore the entire space of patterns that exist. We used $10^{17}$ FLOP/s as this rate of exploration, which is faster than the current fastest supercomputer Sunway Taihu Light, which runs at a reported $9.3\times10^{16}$ FLOP/s as of November 2017 (https://www.top500.org/lists/2017/11/). We recognize that this will scale further in the coming years.
The whole computation is then as follows.
For the lowest r=3:
$$\frac{\binom{20{,}000}{3} \times 1000\ \text{FLOPs}}{10^{17}\ \text{FLOP/s}}$$
which is around $1.3\times10^{-2}$ seconds.
For the highest r=6:
$$\frac{\binom{20{,}000}{6} \times 1000\ \text{FLOPs}}{10^{17}\ \text{FLOP/s}}$$
which is around $8.8\times10^{8}$ seconds, or ~28 years.
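As a sanity check on that last figure (straightforward arithmetic under the stated assumptions):
$$\binom{20{,}000}{6}\approx 8.9\times10^{22},\qquad \frac{8.9\times10^{22}\times10^{3}\ \text{FLOPs}}{10^{17}\ \text{FLOP/s}}\approx 8.9\times10^{8}\ \text{s}\approx 28\ \text{years}.$$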
The true intrinsic patterns within a dataset, however may not just be limited to 3, 4, 5, or 6-order patterns and may be even considerably higher, in which case it rapidly becomes even more impossible to explore by brute force.
The following graph shows the time that it would take to explore all possible patterns of a given order with the above assumptions (note, the y-axis is logarithmic):
As one can see, the problem space very quickly becomes intractable to approach as the search for more interacting variables widens. Therefore, an efficient, elegant algorithmic approach to this is absolutely necessary to explore higher order patterns since it simply cannot currently be explored by brute force in reasonable human timescales.
|
2019-06-24 17:27:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48634597659111023, "perplexity": 493.4493613226871}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999620.99/warc/CC-MAIN-20190624171058-20190624193058-00209.warc.gz"}
|
https://socratic.org/questions/how-many-moles-are-present-in-454-grams-of-co-2
|
# How many moles are present in 454 grams of CO_2?
Moles of $CO_2$ $=$
$\frac{454\ \text{g}}{44\ \text{g}\,\text{mol}^{-1}} \approx 10.3\ \text{mol}$
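For the record (a one-line justification of the divisor, using standard atomic masses): the molar mass of $CO_2$ is $12 + 2\times16 = 44\ \text{g}\,\text{mol}^{-1}$, which is why dividing $454\ \text{g}$ by $44\ \text{g}\,\text{mol}^{-1}$ gives roughly $10.3$ moles.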
|
2019-08-18 21:58:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4730672240257263, "perplexity": 4652.86145985315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314130.7/warc/CC-MAIN-20190818205919-20190818231919-00514.warc.gz"}
|
http://cloud.originlab.com/doc/en/Origin-Help/Split-Columns
|
# 4.4.9 Split Columns
If you want to split one or more columns into multiple columns, you can use the Split Columns (colsplit) tool.
To open the Split Columns tool, you can
• Select Worksheet: Split Columns with the workbook window activated.
or
• Run colsplit -d; in the Command Window.
This tool will split the specified columns by subgrouping. Here, we have three subgrouping methods:
• By Every Nth Row
Every Nth row will be considered as a group. For example, if N is 3, there would be 3 groups, the first group has the data from rows 1, 4, 7 ... 1 + 3 * M, the second group has the data from rows 2, 5, 8, ... 2 + 3 * M, and so on.
• By Sequential N Rows
The sequential N rows will be considered as a group. For example, if N is 5, the data from row 1 to row 5 will be the first group, from row 6 to 10 will be second group, and so on.
• By Reference Column
Specify a reference column to split the source column. You can split the source column according to the values in reference column:
• By Separator
You can specify a value (including <Blank or Missing> or <Text>) as a separator to divide the source column into multiple columns.
Once you have specified the Separator Value, you can decide how to handle the rows that meet the condition with the Rows Meet Condition option: 1) Remove the rows; 2) Use the row as the beginning of a new block; 3) Use the row as the end of the current block.
You can also decide whether to treat consecutive rows that meet the separator condition as one separator with the Treat Consecutive Rows as One check box.
• By Interval
You can specify the interval and start value to define multiple ranges in reference column to split the source column.
In this sample, the data points in the specified ranges (0.2~0.5, 0.5~0.8, 0.8~1.1, and so on) have been picked up and stored in different subgroups. To specify the ranges, you just need to set the start value and the interval.
You can sort the result subgroups by the values in the reference column by checking the Sort by Reference check box.
|
2022-05-17 04:46:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5246866941452026, "perplexity": 1302.178852837104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515501.4/warc/CC-MAIN-20220517031843-20220517061843-00435.warc.gz"}
|
https://alpof.wordpress.com/2021/07/16/strange-spaces-of-networks-and-chords/
|
# Strange spaces of networks and chords
From June 28th, 2021 to July 1st, 2021, the University of Pavia organized a mathematics and music workshop, at which I presented a summary of the work on the categorical formalization of musical networks (the videos of the different talks will probably be put on the site in the near future). An interesting part of this workshop is that it held an online concert at the end, asking participants to contribute a short piece of music based on mathematical ideas.
I decided to participate in this, and submitted a short piece for piano based on networks. Here it is:
Let me explain a little bit what is going on in this piece. As you know from previous blog posts, one can study networks of musical elements in which their interrelations are indicated by arrows labelled in an appropriate group (or category) of transformations. One example that I’ve shown before on this blog, and which I like very much, is the following passage in Webern’s op.11/2.
These three groups of three notes can be described with the following networks.
All these networks share the same $I_8$, $I_9$, and $T_1$ transformations, even though these pitch-class sets are not transpositionally or inversionally related. Going further, we can create all twelve networks based on the same set of transformations, which form the harmonic material of the piece above.
As seen before with network transformations, we can go from one of these chords to another by parsimonious voice-leading, as described below.
But there is more! You may have noticed that some of these chords have common tones. Thus, we can imagine stitching these chords together whenever they have two tones in common, just as we would do for ordinary major and minor chords to get the neo-Riemannian Tonnetz. Except that in this particular case, we get the following space.
If we combine both the voice-leading transformations (an action of the $\mathbb{Z}_{12}$ group, really) and the common-tone relations, we arrive at the following diagram, where the former relations are represented with thick plain lines, and the latter are represented with dashed lines.
The piece starts with the Bb-(Bb)-B chord and ends on the E-(E)-F chord by following a specific path represented below.
This path alternates between voice-leading transformations by semitone and common-tone relations. The transition between G-C#-Ab and C#-G-D is particularly emphasized in the piece, because these networks are located a tritone apart and yet share two common tones. Since Bb and E play important roles (they are the fixed points of $I_8$), they recur in the piece, often interacting with the pitch classes of the network to produce additional harmonies.
So far, we have seen only one network structure, but we can take any other kind of network. However, we will not get the same spaces. For example, if we take the following network structure (color coded here in orange),
take the corresponding chords, and stitch them if they have two pitch classes in common, we will get the following disconnected space.
Fun stuff happens when we start mixing more than one network structure. For example, if we do the same operation with the two network structures below,
we end up with the following space.
This has the topology of a torus (notice how it wraps on both sides), but this is very different from a neo-Riemannian Tonnetz because of the inversion transformations!
Not all network structures will give a torus however. Take for example the two network structures below.
Stitching the networks of the first one alone will give you a single band, whereas stitching the networks of the second one alone will give you three disconnected tetrahedrons. With the two together, we get the following space,
also represented as shown below in its unfolded form.
It surely would be interesting to compose some music based on these chords/networks and the associated properties of the spaces they generate!
## One comment
1. Nice ❣ Shared on ZulipChat • Category Theory
|
2022-01-24 10:27:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7244045734405518, "perplexity": 815.4737216134978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304528.78/warc/CC-MAIN-20220124094120-20220124124120-00144.warc.gz"}
|
https://deepai.org/publication/analysis-of-a-fully-discrete-approximation-for-the-classical-keller-segel-model-lower-and-a-priori-bounds
|
# Analysis of a fully discrete approximation for the classical Keller–Segel model: lower and a priori bounds
This paper is devoted to constructing approximate solutions for the classical Keller–Segel model governing chemotaxis. It consists of a system of nonlinear parabolic equations, where the unknowns are the average density of cells (or organisms), which is a conserved variable, and the average density of chemoattractant. The numerical proposal is made up of a crude finite element method together with a mass-lumping technique and a semi-implicit Euler time integration. The resulting scheme turns out to be linear and decouples the computation of the variables. The approximate solutions keep lower bounds – positivity for the cell density and nonnegativity for the chemoattractant density –, are mass conservative, satisfy a discrete energy law, and have a priori energy estimates. The latter is achieved by means of a discrete Moser–Trudinger inequality. As far as we know, our numerical method is the first one in the literature dealing with all of the previously mentioned properties at the same time.
## 1. Introduction
### 1.1. Aims
In 1970/71 Keller and Segel [15, 16] attempted to derive a set of equations for modeling chemotaxis – a biological process through which an organism (or a cell) migrates in response to a chemical stimulus, be it attractant or repellent. It is nowadays well known that the work of Keller and Segel turned out to be somewhat biologically inaccurate, since their equations admit unrealistic solutions; a little more precisely, solutions that blow up in finite time. Such a phenomenon does not occur in nature. Even though the original Keller–Segel equations are less relevant from a biological point of view, they are mathematically of great interest.
Much of the work on the Keller–Segel equations has been carried out in developing purely analytical results, whereas there are very few numerical results in the literature. This is due to the fact that solving the Keller–Segel equations numerically is a challenging task, because their solutions exhibit many interesting mathematical properties which are not easily carried over to a discrete framework. For instance, solutions to the Keller–Segel equations satisfy lower bounds (positivity and nonnegativity) and enjoy an energy law, which is obtained by testing the equations against nonlinear functions. The cross-diffusion mechanisms governing the chemotactic phenomena are responsible for the Keller–Segel equations being so difficult, not only theoretically but also numerically.
In spite of being a limited model, it is hoped that developing and analyzing numerical methods for the classical Keller–Segel equations may open new roads to deeper insights and better understandings for dealing with the numerical approximation of other chemotaxis models – biologically more realistic –, which are, however, inspired by the original Keller–Segel formulation. In a nutshell, these other chemotaxis models are modifications of the Keller–Segel equations designed to avoid the non-physical blow-up of solutions and hence produce solutions closer to chemotaxis phenomena. For these other chemotaxis models, we recommend the excellent surveys of Hillen and Painter [11], Horstmann [12, 13], and, more recently, Bellomo, Bellouquid, Tao, and Winkler [1]. In these surveys the authors reviewed, up to the date when they were written, the state of the art of modeling and mathematical analysis for the Keller–Segel equations and their variants.
It is our aim in this work to design a fully discrete algorithm for the classical Keller–Segel equations based on a finite element discretization whose discrete solutions satisfy lower and a priori bounds.
### 1.2. The Keller–Segel equations
Let $\Omega\subset\mathbb{R}^2$ be a bounded domain, with $\boldsymbol{n}$ being its outward-directed unit normal vector to $\partial\Omega$, and let $(0,T)$ be a time interval. Take $Q=\Omega\times(0,T)$ and $\Sigma=\partial\Omega\times(0,T)$. Then the boundary-value problem for the Keller–Segel equations reads as follows. Find $u$ and $v$ satisfying
$$\partial_t u-\Delta u=-\nabla\cdot(u\nabla v)\ \text{ in } Q,\qquad \partial_t v-\Delta v=u-v\ \text{ in } Q, \tag{1}$$
subject to the initial conditions
$$u(0)=u_0\ \text{ and }\ v(0)=v_0\ \text{ in } \Omega, \tag{2}$$
and the boundary conditions
$$\nabla u\cdot\boldsymbol{n}=0\ \text{ and }\ \nabla v\cdot\boldsymbol{n}=0\ \text{ on } \Sigma. \tag{3}$$
Here $u$ denotes the average density of organisms (or cells), which is a conserved variable, and $v$ is the average density of chemoattractant, which is a nonconserved variable.
System (1) was motivated by Keller and Segel [15] to describe the aggregation phenomena exhibited by the amoeba Dictyostelium discoideum due to an attractive chemical substance referred to as chemoattractant, which is generated by the amoeba itself and is nevertheless degraded by living conditions. Moreover, diffusion is also present in the motion of amoebae and chemoattractant.
The diffusion phenomena performed by cells and chemoattractant are modelled by the terms $\Delta u$ and $\Delta v$, respectively, while the aggregation mechanism is described by the term $\nabla\cdot(u\nabla v)$. It is this nonlinear term that is the major difficulty in studying system (1). Further, the production and degradation of chemoattractant are associated with the term $u-v$.
Concerning the mathematical analysis for system (1), Nagai, Senba, and Yoshida [17] proved existence, uniqueness and regularity of solutions under a smallness condition on the initial mass $\|u_0\|_{L^1(\Omega)}$ by means of one particular instance of the Moser–Trudinger inequality; in the particular case that $\Omega$ is a ball, the condition can be relaxed. Herrero and Velázquez [10] dealt with the first blow-up framework by constructing some radially symmetric solutions which blow up within finite time. The next progress in this direction, with $\Omega$ non-radial and simply connected, was the work of Horstmann and Wang [14], who found some unbounded solutions when the initial mass exceeds the critical threshold. So far there is no supporting evidence as to whether such solutions may evolve to produce a blow-up phenomenon within finite time or whether, on the contrary, they may increase to infinity with time. The main tool in proving the existence of blow-up solutions is the energy law which stems from system (1). Observe that an inadequate approximation of lower bounds can trigger oscillations of the variables, which can lead to spurious, blow-up solutions.
Concerning the numerical analysis for system (1), very little is said about numerical algorithms which keep lower bounds, are mass conservative, and have a discrete energy law. Proper numerical treatment of these properties is made difficult by the fact that the nonlinearity occurs in the highest-order derivative. Numerical algorithms are mainly designed so as to keep lower bounds. We refer the reader to [18, 5, 22, 3, 21], which mainly deal with lower bounds and mass conservation. As far as we are aware, there is no numerical method coping with a discrete energy law.
### 1.3. Notation
We collect here as a reference some standard notation used throughout the paper. For $1\le p<\infty$, we denote by $L^p(\Omega)$ the usual Lebesgue space, i.e.,
$$L^p(\Omega)=\Big\{v:\Omega\to\mathbb{R}\ :\ v \text{ Lebesgue-measurable},\ \int_\Omega|v(x)|^p\,dx<\infty\Big\},$$
or, for $p=\infty$,
$$L^\infty(\Omega)=\Big\{v:\Omega\to\mathbb{R}\ :\ v \text{ Lebesgue-measurable},\ \operatorname*{ess\,sup}_{x\in\Omega}|v(x)|<\infty\Big\}.$$
This space is a Banach space endowed with the norm $\|v\|_{L^p(\Omega)}=\big(\int_\Omega|v(x)|^p\,dx\big)^{1/p}$ if $1\le p<\infty$, or $\|v\|_{L^\infty(\Omega)}=\operatorname*{ess\,sup}_{x\in\Omega}|v(x)|$ if $p=\infty$. In particular, $L^2(\Omega)$ is a Hilbert space. We shall use $(\cdot,\cdot)$ for its inner product and $\|\cdot\|$ for its norm.
Let $\alpha=(\alpha_1,\alpha_2)$ be a multi-index with $|\alpha|=\alpha_1+\alpha_2$, and let $\partial^\alpha$ be the differential operator such that
$$\partial^\alpha=\Big(\frac{\partial}{\partial x_1}\Big)^{\alpha_1}\Big(\frac{\partial}{\partial x_2}\Big)^{\alpha_2}.$$
For $m\ge0$ and $1\le p\le\infty$, we consider $W^{m,p}(\Omega)$ to be the Sobolev space of all functions whose derivatives up to order $m$ are in $L^p(\Omega)$, i.e.,
$$W^{m,p}(\Omega)=\{v\in L^p(\Omega)\ :\ \partial^k v\in L^p(\Omega)\ \ \forall\,|k|\le m\},$$
associated to the norm
$$\|f\|_{W^{m,p}(\Omega)}=\Big(\sum_{|\alpha|\le m}\|\partial^\alpha f\|^p_{L^p(\Omega)}\Big)^{1/p}\ \text{ for } 1\le p<\infty,\qquad \|f\|_{W^{m,\infty}(\Omega)}=\max_{|\alpha|\le m}\|\partial^\alpha f\|_{L^\infty(\Omega)}\ \text{ for } p=\infty.$$
For $p=2$, we denote $H^m(\Omega):=W^{m,2}(\Omega)$. Moreover, we make use of the space
$$H^2_N(\Omega)=\Big\{v\in H^2(\Omega)\ :\ \int_\Omega v(x)\,dx=0 \ \text{ and }\ \partial_{\boldsymbol{n}}v=0 \text{ on } \partial\Omega\Big\},$$
for which it is known that $\|\Delta\cdot\|$ defines a norm equivalent to the full $H^2(\Omega)$-norm.
### 1.4. Outline
The remainder of this paper is organized in the following way. In the next section we state our finite element space and some tools. In particular, we prove a discrete version of a variant of Moser–Trudinger's inequality. In section 3, we apply our ideas to discretize system (1) in space and time, defining our numerical method, and formulate our main result. Section 4 is dedicated to demonstrating lower bounds, a discrete energy law, and a priori bounds, all of which are local in time, for approximate solutions. This is accomplished in a series of lemmas where the final argument is an induction procedure on the time step, so as to obtain the above-mentioned properties globally in time.
## 2. Technical preliminaries
This section is mainly devoted to setting out the hypotheses and some auxiliary results concerning the finite element space that will be used throughout this work.
### 2.1. Hypotheses
We consider the finite element approximation of (1) under the following assumptions on the domain, the mesh and the finite element space. Moreover, these assumptions will be mentioned when stating our main result.
1. Let $\Omega$ be a convex, bounded domain of $\mathbb{R}^2$ with a polygonal boundary, and let $\theta_\Omega$ be the minimum interior angle at the vertices of $\partial\Omega$.
2. Let $\{\mathcal{T}_h\}_{h>0}$ be a family of acute, shape-regular, quasi-uniform triangulations of $\bar\Omega$ made up of triangles, so that $\bar\Omega=\bigcup_{T\in\mathcal{T}_h}T$, where $h=\max_{T\in\mathcal{T}_h}h_T$, with $h_T$ being the diameter of $T$. More precisely, we assume that
1. there exists $\alpha>0$, independent of $h$, such that
$$\min\{\operatorname{diam}B_T : T\in\mathcal{T}_h\}\ge\alpha h,$$
where $B_T$ is the largest ball contained in $T$, and
2. there exists $\theta_0\le\pi/2$ such that every angle between two edges of a triangle is bounded by $\theta_0$.
Further, let $\{a_i\}_{i\in I}$ be the coordinates of the nodes of $\mathcal{T}_h$.
3. Associated with $\mathcal{T}_h$ is the finite element space
$$X_h=\{x_h\in C^0(\bar\Omega)\ :\ x_h|_T\in\mathcal{P}_1(T)\ \ \forall\,T\in\mathcal{T}_h\},$$
where $\mathcal{P}_1(T)$ is the set of linear polynomials on $T$. Let $\{\varphi_{a_i}\}_{i\in I}$ be the standard basis functions for $X_h$.
### 2.2. Auxiliary results
In the subsequent sections, we will make use of some technical results concerning the above-mentioned hypotheses.
A key tool for proving lower bounds for the finite element approximation of (1) is the acuteness of $\mathcal{T}_h$.
###### Proposition 2.1.
Let $\Omega$ be a polygon. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being acute. Then, for each $T\in\mathcal{T}_h$ with vertices $\{a_i\}$, there exists a constant $C_{neg}>0$, depending on $\theta_0$, but otherwise independent of $h$ and $T$, such that
$$\int_T\nabla\varphi_{a_i}\cdot\nabla\varphi_{a_j}\,dx\le-C_{neg} \tag{4}$$
for all $i,j$ with $i\neq j$, and
$$\int_T\nabla\varphi_{a_i}\cdot\nabla\varphi_{a_i}\,dx\ge C_{neg} \tag{5}$$
for all $i$.
A proof of (4) and (5) can be found in [9].
Both local and global finite element properties for $X_h$ will be needed, such as inverse estimates and bounds for the interpolation error. We first recall some local inverse estimates. See [2, Lem. 4.5.3] or [6, Lem. 1.138] for a proof.
###### Proposition 2.2.
Let $\Omega$ be polygonal. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being quasi-uniform. Then, for each $T\in\mathcal{T}_h$ and $1\le p\le\infty$, there exists a constant $C_{inv}>0$, independent of $h$ and $T$, such that, for all $x_h\in X_h$,
$$\|\nabla x_h\|_{L^p(T)}\le C_{inv}h^{-1}\|x_h\|_{L^p(T)} \tag{6}$$
and
$$\|\nabla x_h\|_{L^\infty(T)}\le C_{inv}h^{-\frac{2}{p}}\|\nabla x_h\|_{L^p(T)}. \tag{7}$$
Concerning global inverse inequalities, we need the following.
###### Proposition 2.3.
Let $\Omega$ be polygonal. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being quasi-uniform. Then for each $1\le p\le\infty$, there exists a constant $C_{inv}>0$, independent of $h$, such that, for all $x_h\in X_h$,
$$\|x_h\|_{L^\infty(\Omega)}\le C_{inv}h^{-1}\|x_h\|, \tag{8}$$
$$\|\nabla x_h\|_{L^p(\Omega)}\le C_{inv}h^{-1}\|x_h\|_{L^p(\Omega)}, \tag{9}$$
$$\|\nabla x_h\|_{L^p(\Omega)}\le C_{inv}h^{-2(\frac12-\frac1p)}\|\nabla x_h\|_{L^2(\Omega)}, \tag{10}$$
and
$$\|\nabla x_h\|_{L^\infty(\Omega)}\le C_{inv}h^{-\frac{2}{p}}\|\nabla x_h\|_{L^p(\Omega)}. \tag{11}$$
We introduce $I_h:C^0(\bar\Omega)\to X_h$, the standard nodal interpolation operator, such that $(I_h\varphi)(a_i)=\varphi(a_i)$ for all nodes $a_i$. Associated with $I_h$, a discrete inner product on $C^0(\bar\Omega)$ is defined by
$$(x_h,\bar{x}_h)_h=\int_\Omega I_h(x_h(x)\bar{x}_h(x))\,dx.$$
We also introduce
$$\|x_h\|_h=(x_h,x_h)_h^{\frac12}.$$
Local and global error bounds for $I_h$ are as follows (cf. [2, Thm. 4.4.4] or [6, Thm. 1.103] for a proof).
###### Proposition 2.4.
Let $\Omega$ be polygonal. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being quasi-uniform. Then, for each $T\in\mathcal{T}_h$, there exists $C_{app}>0$, independent of $h$ and $T$, such that
$$\|\varphi-I_h\varphi\|_{L^1(T)}\le C_{app}h^2\|\nabla^2\varphi\|_{L^1(T)}\quad\forall\,\varphi\in W^{2,1}(T). \tag{12}$$
###### Proposition 2.5.
Let $\Omega$ be polygonal. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being quasi-uniform. Then it follows that there exists $C_{app}>0$, independent of $h$, such that
$$\|\nabla(\varphi-I_h\varphi)\|_{L^2(\Omega)}\le C_{app}h\|\Delta\varphi\|_{L^2(\Omega)}\quad\forall\,\varphi\in H^2(\Omega). \tag{13}$$
###### Corollary 2.6.
Let $\Omega$ be polygonal. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being quasi-uniform. Let $x_h,\bar{x}_h\in X_h$ and let $n\ge2$ be an integer. Then it follows that there exist two positive constants $C_{app}$ and $C_{sta}$, independent of $h$, such that
$$\|x_h^n-I_h(x_h^n)\|_{L^1(\Omega)}\le C_{app}\,n(n-1)\,h^2\int_\Omega|x_h(x)|^{n-2}|\nabla x_h(x)|^2\,dx, \tag{14}$$
$$\|x_h\bar{x}_h-I_h(x_h\bar{x}_h)\|_{L^1(\Omega)}\le C_{com}\,h\,\|x_h\|_{L^2(\Omega)}\|\nabla\bar{x}_h\|_{L^2(\Omega)}, \tag{15}$$
and
$$\|x_h^n\|_{L^1(\Omega)}\le\|I_h(x_h^n)\|_{L^1(\Omega)}\le C_{sta}\|x_h^n\|_{L^1(\Omega)}, \tag{16}$$
where $C_{sta}$ depends on $n$.
###### Proof.
Let $T\in\mathcal{T}_h$ and compute
$$\nabla^2(x_h^n)=n(n-1)\,x_h^{n-2}\sum_{i,j=1}^{d}\partial_i x_h\,\partial_j x_h\quad\text{on } T.$$
Then, from (12) and the above identity, we have
$$\|x_h^n-I_h(x_h^n)\|_{L^1(T)}\le C_{app}h^2\|\nabla^2(x_h^n)\|_{L^1(T)}\le C_{app}\,n(n-1)\,h^2\int_T|x_h(x)|^{n-2}|\nabla x_h(x)|^2\,dx.$$
Summing over $T\in\mathcal{T}_h$ yields (14). The proof of (15) follows very closely the arguments of (14) for $n=2$. The first part of assertion (16) is a simple application of Jensen's inequality, while the second part follows from (14) on using Hölder's inequality, (9) for $p=2$ and, later on, Minkowski's inequality. ∎
The proof of the following proposition can be found in [4]. It is a generalization of a Moser–Trudinger-type inequality.
###### Proposition 2.7 (Moser-Trudinger).
Let $\Omega$ be polygonal with $\theta_\Omega$ being the minimum interior angle at the vertices of $\partial\Omega$. Then there exists a constant $C_\Omega>0$ depending on $\Omega$ such that for all $u\in H^1(\Omega)$ with $\|\nabla u\|\le1$ and $\int_\Omega u(x)\,dx=0$, it follows that
$$\int_\Omega e^{\alpha|u(x)|^2}dx\le C_\Omega, \tag{17}$$
where $\alpha\le2\theta_\Omega$.
###### Corollary 2.8.
Let $\Omega$ be polygonal with $\theta_\Omega$ being the minimum interior angle at the vertices of $\partial\Omega$. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being quasi-uniform. Let $u_h\in X_h$ with $m=\frac{1}{|\Omega|}\int_\Omega u_h(x)\,dx$. Then it follows that there exists a constant $C_{MT}>0$, independent of $h$, such that
$$\int_\Omega I_h(e^{u_h(x)})\,dx\le C_\Omega\big(1+C_{MT}\|\nabla u_h\|^2\big)\,e^{\frac{1}{8\theta_\Omega}\|\nabla u_h\|^2+\frac{1}{|\Omega|}\|u_h\|_{L^1(\Omega)}}. \tag{18}$$
###### Proof.
From (14), we have
$$\int_\Omega I_h(e^{u_h(x)})\,dx=\int_\Omega(1+u_h(x))\,dx+\sum_{n=2}^{\infty}\frac{1}{n!}\int_\Omega I_h(u_h^n(x))\,dx\le\sum_{n=0}^{\infty}\frac{1}{n!}\int_\Omega u_h^n(x)\,dx+\sum_{n=2}^{\infty}\frac{C_{app}\,n(n-1)\,h^2}{n!}\int_\Omega|\nabla u_h(x)|^2u_h^{n-2}(x)\,dx=\int_\Omega\big(1+C_{app}h^2|\nabla u_h(x)|^2\big)e^{u_h(x)}\,dx. \tag{19}$$
Let $v_h=(u_h-m)/\|\nabla u_h\|$ with $\int_\Omega v_h(x)\,dx=0$. Young's inequality gives
$$u_h=\|\nabla u_h\|v_h+m\le\frac{1}{8\theta_\Omega}\|\nabla u_h\|^2+2\theta_\Omega|v_h|^2+m. \tag{20}$$
Thus, combining (19) and (20) yields, on noting (7) for $p=2$ and (17), that
$$\int_\Omega I_h(e^{u_h(x)})\,dx\le e^{\frac{1}{8\theta_\Omega}\|\nabla u_h\|^2+m}\int_\Omega\big(1+C_{app}h^2\|\nabla u_h\|^2|\nabla v_h|^2\big)e^{2\theta_\Omega|v_h(x)|^2}\,dx\le C_\Omega\big(1+C_{app}C_{inv}\|\nabla u_h\|^2\big)\,e^{\frac{1}{8\theta_\Omega}\|\nabla u_h\|^2+m}.$$
An (average) interpolation operator into $X_h$ will be required in order to properly initialize our numerical method. We refer to [19, 7].
###### Proposition 2.9.
Let $\Omega$ be polygonal. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being quasi-uniform. Then there exists an (average) interpolation operator $Q_h:L^1(\Omega)\to X_h$ such that
$$\|Q_h\psi\|_{W^{s,p}(\Omega)}\le C_{sta}\|\psi\|_{W^{s,p}(\Omega)}\quad\text{for } s=0,1 \text{ and } 1\le p\le\infty, \tag{21}$$
and
$$\|Q_h(\psi)-\psi\|_{W^{s,p}(\Omega)}\le C_{app}h^{1+m-s}\|\psi\|_{W^{m+1,p}(\Omega)}\quad\text{for } 0\le s\le m\le1. \tag{22}$$
Moreover, let $\tilde\Delta_h$ be defined from $X_h$ to $X_h$ as
$$-(\tilde\Delta_h x_h,\bar{x}_h)_h=(\nabla x_h,\nabla\bar{x}_h)\quad\text{for all } \bar{x}_h\in X_h, \tag{23}$$
and let $x(h)$ be such that
$$-\Delta x(h)=-\tilde\Delta_h x_h \ \text{ in } \Omega,\qquad \partial_{\boldsymbol{n}}x(h)=0 \ \text{ on } \partial\Omega. \tag{24}$$
From elliptic regularity theory, the well-posedness of (24) is ensured by the convexity assumption on $\Omega$, and
$$\|x(h)\|_{H^2_N(\Omega)}\le C\|-\tilde\Delta_h x_h\|.$$
See [8] for a proof.
###### Proposition 2.10.
Let $\Omega$ be a convex polygon. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being quasi-uniform. Then there exists a constant $C_{\mathrm{Lap}} > 0$, independent of $h$, such that

(25) $$\|\nabla(x(h) - x_h)\|_{L^2(\Omega)} \le C_{\mathrm{Lap}}\,h\,\|\widetilde\Delta_h x_h\|_{L^2(\Omega)}.$$
###### Proof.
We refer the reader to [9] for a proof which uses (13) and (15). ∎
###### Corollary 2.11.
Let $\Omega$ be a convex polygon. Consider $X_h$ to be constructed over $\mathcal{T}_h$ being quasi-uniform. Then, for each $p \in [1, \infty)$, there exists a constant $C_{\mathrm{sta}} > 0$, independent of $h$, such that

(26) $$\|\nabla x_h\|_{L^p(\Omega)} \le C_{\mathrm{sta}}\,\|\widetilde\Delta_h x_h\|.$$
###### Proof.
The triangle inequality gives
$$\|\nabla x_h\|_{L^p(\Omega)} \le \|\nabla x_h - \nabla Q_h x(h)\|_{L^p(\Omega)} + \|\nabla Q_h x(h) - \nabla x(h)\|_{L^p(\Omega)} + \|\nabla x(h)\|_{L^p(\Omega)},$$
and hence applying (10), (25), (21), (22), and Sobolev's inequality yields (26). ∎
## 3. Presentation of main result
We now define our numerical approximation of system (1). Assume that the initial data satisfy $u_0 > 0$ and $v_0 \ge 0$ a.e. in $\Omega$.
We begin by approximating the initial data $(u_0, v_0)$ by $(u_{0h}, v_{0h})$ as follows. Define

(27) $$u_{0h} = Q_h u_0,$$

which satisfies

(28) $$u_{0h} > 0 \ \text{a.e. in } \Omega, \quad \|u_{0h}\|_{L^1(\Omega)} \le C_{\mathrm{sta}}\,\|u_0\|_{L^1(\Omega)} \quad\text{and}\quad \|u_{0h}\| \le C_{\mathrm{sta}}\,\|u_0\|,$$

and

(29) $$v_{0h} = Q_h v_0,$$

which satisfies

(30) $$v_{0h} \ge 0 \ \text{a.e. in } \Omega, \quad \|v_{0h}\|_{H^1(\Omega)} \le C\,\|v_0\|_{H^1(\Omega)} \quad\text{and}\quad \|\widetilde\Delta_h v_{0h}\| \le C\,\|\Delta v_0\|.$$
Given $N \in \mathbb{N}$, we let $\{t_n = nk\}_{n=0}^{N}$ be a uniform partitioning of $[0,T]$ with time step $k = T/N$. Given $(u_h^n, v_h^n) \in X_h \times X_h$, find $(u_h^{n+1}, v_h^{n+1}) \in X_h \times X_h$ such that, for all $x_h \in X_h$,

(31) $$(\delta_t u_h^{n+1}, x_h)_h + (\nabla u_h^{n+1}, \nabla x_h) = (\nabla v_h^n,\, u_h^{n+1}\nabla x_h)$$

and

(32) $$(\delta_t v_h^{n+1}, x_h)_h + (\nabla v_h^{n+1}, \nabla x_h) + (v_h^{n+1}, x_h)_h = (u_h^{n+1}, x_h)_h,$$

where $\delta_t w^{n+1} := (w^{n+1} - w^n)/k$.
It should be noted that scheme (31)-(32) combines a finite element method together with a mass-lumping technique to treat some terms and a semi-implicit time integrator. The resulting scheme is linear and decouples the computation of $u_h^{n+1}$ from that of $v_h^{n+1}$: one first solves (31) for $u_h^{n+1}$ using $v_h^n$, and then solves (32) for $v_h^{n+1}$.
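In algebraic terms, one time step amounts to two linear solves: first $\big(\tfrac1k M_L + K - C(v^n)\big)u^{n+1} = \tfrac1k M_L u^n$ from (31), then $\big(\tfrac1k M_L + K + M_L\big)v^{n+1} = \tfrac1k M_L v^n + M_L u^{n+1}$ from (32), where $C(v^n)$ discretizes the chemotaxis term. A minimal SciPy sketch of this step; the matrices and the helper `chemo_matrix` (one possible assembly is sketched after (34) below) are assumptions of mine, not code from the paper:

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def step(u_n, v_n, M_L, K, k, chemo_matrix):
    """One semi-implicit step of (31)-(32).

    M_L: lumped (diagonal) mass matrix, K: stiffness matrix, k: time step;
    chemo_matrix(v_n) assembles C(v^n) via the barycentric rule (34)."""
    C = chemo_matrix(v_n)
    # (31): solve for u^{n+1} with v^n frozen; the system is linear.
    A_u = sp.csc_matrix(M_L / k + K - C)
    u_np1 = spla.spsolve(A_u, M_L @ u_n / k)
    # (32): solve for v^{n+1} using the freshly computed u^{n+1}.
    A_v = sp.csc_matrix(M_L / k + K + M_L)
    v_np1 = spla.spsolve(A_v, M_L @ v_n / k + M_L @ u_np1)
    return u_np1, v_np1
```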
In order to carry out our numerical analysis, we must rewrite the chemotaxis term by using a barycentric quadrature rule as follows. Let $T \in \mathcal{T}_h$ and consider $b_T$ to be the barycentre of $T$. Then let $\bar u_h^{n+1}$ be the interpolation of $u_h^{n+1}$ into the space of piecewise constant functions over $\mathcal{T}_h$, defined by

(33) $$\bar u_h^{n+1}\big|_T = u_h^{n+1}(b_T).$$

As a result, one has

(34) $$(\nabla v_h^n,\, u_h^{n+1}\nabla x_h) = \sum_{T \in \mathcal{T}_h} |T|\,\nabla v_h^n\cdot\nabla x_h\; u_h^{n+1}(b_T) = (\nabla v_h^n,\, \bar u_h^{n+1}\nabla x_h).$$
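Since each P1 basis function equals $1/3$ at the barycentre of any triangle containing its node, (34) yields the matrix entries $C_{ij} = \sum_{T}|T|\,\varphi_{j}(b_T)\,(\nabla v_h^n\cdot\nabla\varphi_{i})\big|_T$. A hedged dense-assembly sketch; the mesh arrays are assumed inputs, and counter-clockwise orientation is not required because the signed area is used for the gradients:

```python
import numpy as np

def chemo_matrix_dense(tris, coords, v_n):
    """Assemble C_ij = sum_T |T| * phi_j(b_T) * (grad v^n . grad phi_i)|_T.

    tris: (n_tri, 3) array of vertex indices; coords: (n_nodes, 2) array."""
    n = coords.shape[0]
    C = np.zeros((n, n))
    for tri in tris:
        p = coords[tri]
        d1, d2 = p[1] - p[0], p[2] - p[0]
        signed = 0.5 * (d1[0] * d2[1] - d1[1] * d2[0])    # signed area of T
        edges = np.array([p[1] - p[2], p[2] - p[0], p[0] - p[1]])
        # Constant gradients of the three local P1 basis functions on T.
        grads = np.column_stack((edges[:, 1], -edges[:, 0])) / (2.0 * signed)
        grad_v = grads.T @ v_n[tri]                       # grad v^n, constant on T
        for i_loc, i in enumerate(tri):
            gi = abs(signed) * (grads[i_loc] @ grad_v)    # |T| (grad v . grad phi_i)
            for j in tri:
                C[i, j] += gi / 3.0                       # phi_j(b_T) = 1/3
    return C
```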
From now on we will use $C$ to denote a generic constant independent of the discretization parameters $h$ and $k$.
Let us define

(35) $$E_0(u_h, v_h) = \frac{1}{2}\|v_h\|_h^2 + \frac{1}{2}\|\nabla v_h\|^2 - (u_h, v_h)_h + (\log u_h, u_h)_h,$$

(36) $$E_1(u_h, v_h) = \|u_h\|_h^2 + \|\widetilde\Delta_h v_h\|^2,$$

and, for each $\varepsilon, \delta > 0$,

(37) $$R_0^{\varepsilon,\delta}(u_{0h}, v_{0h}) := \frac{1}{\delta e} + \frac{\|u_{0h}\|_{L^1(\Omega)}}{\delta}\left(\frac{C_\Omega}{\varepsilon} + \varepsilon + \frac{1+\delta}{|\Omega|}\big(\|v_{0h}\|_{L^1(\Omega)} + \|u_{0h}\|_{L^1(\Omega)}\big)\right).$$

Associated with the above definitions, consider
$$B_0(u_{0h}, v_{0h}) = \frac{1}{\delta}E_0(u_{0h}, v_{0h}) + R_0^{\varepsilon,\delta}(u_{0h}, v_{0h}),$$
$$B_1(u_{0h}, v_{0h}) = \left(1 + \frac{1}{\delta}\right)E_0(u_{0h}, v_{0h}) + R_0^{\varepsilon,\delta}(u_{0h}, v_{0h}) + \frac{2|\Omega|}{e},$$
and
$$B_2(u_{0h}, v_{0h}) = E_0(u_{0h}, v_{0h}) + B_0(u_{0h}, v_{0h}) + B_1(u_{0h}, v_{0h}).$$
Finally, define
$$F(u_{0h}, v_{0h}) = e^{B_2(u_{0h}, v_{0h})} + T^{\frac12}B_2^{\frac12}(u_{0h}, v_{0h})\left(E_0(u_{0h}, v_{0h}) + C\,T\,B_1^{3}(u_{0h}, v_{0h}) + C\,T\,\|u_{0h}\|_{L^1(\Omega)}\right).$$
The role of the above quantities will become apparent later.
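For concreteness, $E_0$ and $E_1$ are directly computable from nodal vectors. A sketch, with the assumption (made here only for brevity) that every norm is evaluated in the lumped $\|\cdot\|_h$ sense, which for $\|\widetilde\Delta_h v_h\|$ is a simplification of the $L^2$ norm used above:

```python
import numpy as np

def energies(u, v, M_diag, K):
    """E_0 and E_1 of (35)-(36) for nodal vectors u (positive) and v.

    M_diag: diagonal of the lumped mass matrix; K: stiffness matrix."""
    lap_v = -(K @ v) / M_diag                    # tilde-Delta_h v, cf. (23)
    E0 = (0.5 * M_diag @ v**2 + 0.5 * v @ (K @ v)
          - M_diag @ (u * v) + M_diag @ (u * np.log(u)))
    E1 = M_diag @ u**2 + M_diag @ lap_v**2
    return E0, E1
```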
We are now prepared to state the main result of this paper.
###### Theorem 3.1.
Assume that the foregoing hypotheses are satisfied. Let $u_0$ and $v_0$ be as above, and take $u_{0h}$ and $v_{0h}$ defined by (27) and (29), respectively. Assume that $(h, k)$ fulfill

(38) $$C\,\frac{k}{h^2}\,F(u_{0h}, v_{0h}) < \frac{1}{2}$$

and

(39) $$-C_{\mathrm{neg}} + C\,h^{1-\frac{2}{p}}\,F^{\frac12}(u_{0h}, v_{0h}) < 0.$$
Then the sequence $\{(u_h^m, v_h^m)\}_{m\ge1}$ computed via (31) and (32) satisfies the following properties, for all $m \ge 1$:

• Lower bounds:

(40) $$u_h^m(x) > 0$$

and

(41) $$v_h^m(x) \ge 0$$

for all $x \in \overline{\Omega}$,

• $L^1$-bounds:

(42) $$\|u_h^m\|_{L^1(\Omega)} = \|u_{0h}\|_{L^1(\Omega)}$$

and

(43) $$\|v_h^m\|_{L^1(\Omega)} \le \|v_{0h}\|_{L^1(\Omega)} + \|u_{0h}\|_{L^1(\Omega)}.$$

• A discrete energy law:

(44) $$E_0(u_h^m, v_h^m) + k\sum_{r=1}^{m}\left(\|\delta_t v_h^r\|_h^2 + k\left\|A_h^{-\frac12}(u_h^r)\nabla u_h^r - A_h^{\frac12}(u_h^r)\nabla v_h^{r-1}\right\|^2\right) \le E_0(u_{0h}, v_{0h}),$$

where $A_h$ is defined in (60).

Moreover, if we are given $h$ such that

(45) $$C\,h^{1-\frac{2}{p}}\,E_1(u_{0h}, v_{0h}) \le \frac{5}{12},$$

it follows that

(46) $$E_1(u_h^m, v_h^m) + \frac{k}{2}\sum_{r=1}^{m}\left(\|\nabla u_h^r\|^2 + \|\nabla\widetilde\Delta_h v_h^r\|^2\right) \le F(u_{0h}, v_{0h}).$$
As system (31)-(32) is linear and finite dimensional, existence follows from uniqueness. The latter is an immediate outcome of the a priori bounds for the discrete solutions.
## 4. Proof of main result
In this section we address the proof of Theorem 3.1. Rather than proving all the estimates of Theorem 3.1 at once, since they are interconnected, we have divided the proof into various subsections for the sake of clarity. The final argument will be an induction procedure on $n$ that relies on the semi-explicit time discretization employed in (31).
### 4.1. Lower bounds and a discrete energy law
We first demonstrate lower bounds for the discrete solutions and, as a consequence, establish a discrete local-in-time energy law.
###### Lemma 4.1 (Lower bounds).
Assume that the hypotheses of Theorem 3.1 are satisfied. Let $u_h^n(x) > 0$ and $v_h^n(x) \ge 0$ for all $x \in \overline{\Omega}$, and let

(47) $$\|\widetilde\Delta_h v_h^n\|^2 \le F(u_{0h}, v_{0h}).$$

Then if one chooses $(h, k)$ satisfying (38) and (39), it follows that the solutions $(u_h^{n+1}, v_h^{n+1})$ computed via (31) and (32) are lower bounded, i.e., for all $x \in \overline{\Omega}$,

(48) $$u_h^{n+1}(x) > 0$$

and

(49) $$v_h^{n+1}(x) \ge 0.$$
###### Proof.
Since $u_h^{n+1}$ and $v_h^{n+1}$ are piecewise linear polynomial functions, it will suffice to prove that (48) and (49) hold at the nodes. To do this, let $T \in \mathcal{T}_h$ be a fixed triangle with vertices $a_1, a_2, a_3$, and choose two of them, i.e. $a_i$ and $a_j$ with $i \ne j$. Then, from (6), (7), (26), and (47), we have, on noting (34), that

(50) $$\int_T \bar\varphi_{a_i}\,\nabla v_h^n\cdot\nabla\varphi_{a_j}\,dx = \int_T \varphi_{a_i}(b_T)\,\nabla v_h^n\cdot\nabla\varphi_{a_j}\,dx \le |T|\,\|\varphi_{a_i}\|_{L^\infty(T)}\,\|\nabla v_h^n\|\cdots$$
https://proofindex.com/?page=blog&post=intro-to-total-functional-programming/measure-functions
# Measure functions
The idea of a measure function is to map every possible tuple of arguments to a natural number, such that each recursive call maps to a strictly smaller number. For example, consider Euclid's GCD function:
$$\text{gcd}(a, b) = \begin{cases} \text{gcd}(a\mathop{\text{mod}}b, a) & \text{if }b \ne 0 \\ a & \text{otherwise} \end{cases}$$
This function cannot be shown to terminate by a lexicographic ordering argument, so we need to find a measure function. Call the $$\text{gcd}$$ arguments $$(a, b),$$ and the arguments of the recursive call $$(a', b') = (a \bmod b,\, a).$$ Notice that neither $$a'$$ nor $$b'$$ exceeds $$\max(a, b).$$ Moreover, since $$b \ne 0,$$ we have $$a' = a \bmod b < b$$ while $$b' = a,$$ so $$a' + b' < a + b.$$ Thus, a simple measure function to prove $$\text{gcd}$$ terminates is $$m(a, b) = a + b.$$
theory Scratch imports Main begin
function gcd :: "nat ⇒ nat ⇒ nat" where
"gcd a b = (if b ≠ 0 then gcd (a mod b) a else a)"
apply auto
done
termination gcd
apply lexicographic_order
oops
termination gcd
  apply (relation "measure (λ(a, b). a + b)")
   apply simp
  apply (subst in_measure)
  (* remaining goal: a mod b + a < a + b, which follows from a mod b < b
     when b ≠ 0; the exact simp lemmas may vary between Isabelle versions *)
  apply (simp add: add.commute mod_less_divisor)
  done

end
http://math.stackexchange.com/questions/245084/exact-sequences-and-hom
# Exact sequences and Hom
I have an exact sequence $$1 \longrightarrow A \overset{\phi}\longrightarrow B \overset{\psi}\longrightarrow C \longrightarrow 1$$ of commutative rings $A,B,C$, where the exact sequence only involves the multiplicative structure. If I take a free abelian group $X$, how would I form $$0 \longrightarrow Hom(X,A) \overset{\phi'}\longrightarrow Hom(X,B) \overset{\psi'}\longrightarrow Hom(X,C) \longrightarrow 0,$$ given that in the second sequence the groups are under addition? I have seen this done many times when both sequences consist of additive groups: you just define $\phi'(\alpha)=\phi \circ \alpha$, and this works as both $\alpha$ and $\phi$ are additive. But I can't see how to do this when $\phi$ is multiplicative and $\phi'$ has to be additive.
Or, more generally, what if $A,B,C,X$ were all $G$-modules for some finite group $G$?
## 2 Answers
If I understand your question correctly, then the answer is: it doesn't matter. These are groups, not rings: multiplication and addition mean the same thing, i.e. an application of the group operation. And everything's abelian, so talking about addition is fine.
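To make this explicit (a sketch in the question's notation): write the operation of $B$ multiplicatively and that of $\mathrm{Hom}(X,B)$ additively, with $(\alpha_1+\alpha_2)(x) := \alpha_1(x)\,\alpha_2(x)$. Then the usual definition still works:
$$\phi'(\alpha) = \phi\circ\alpha, \qquad \phi'(\alpha_1+\alpha_2)(x) = \phi\big(\alpha_1(x)\,\alpha_2(x)\big) = \phi(\alpha_1(x))\,\phi(\alpha_2(x)) = \big(\phi'(\alpha_1)+\phi'(\alpha_2)\big)(x),$$
so $\phi'$ is a group homomorphism regardless of which symbol denotes the operation.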
What if $A,B,C$ were rings but in the first exact sequence we had them as groups under multiplication? I'll fix the question as this is what I mean. Sorry – Chris Birkbeck Nov 26 '12 at 17:43
A ring is a group under multiplication if and only if it is the zero ring. You will have to be much more precise about what you're doing here. – Zhen Lin Nov 26 '12 at 18:52
Another thing to add to Clive's answer is that, in case $X$ is indeed a free Abelian group on $\kappa$ generators, we have $\hom(X,A)\cong \prod^\kappa A$ (a direct product, which coincides with $\oplus^\kappa A$ when $\kappa$ is finite). That may make your second exact sequence clearer.
https://brilliant.org/problems/trailing-zeroes-of-a-sum/
# Trailing zeroes of a sum
How many trailing zeroes are in the decimal representation of $n=1+\displaystyle{\sum_{k=1}^{2013}k!(k^3+2k^2+3k+1)}?$
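One possible approach (a sketch, not an official solution): the summand telescopes, since
$k!\,(k^3+2k^2+3k+1) = (k+1)!\,(k+1)^2 - k!\,k^2.$
Hence
$n = 1 + \displaystyle{\sum_{k=1}^{2013}\left[(k+1)!\,(k+1)^2 - k!\,k^2\right]} = 1 + 2014!\cdot 2014^2 - 1 = 2014!\cdot 2014^2.$
Since $5 \nmid 2014$, the power of $5$ in $n$ is $v_5(2014!) = \left\lfloor\frac{2014}{5}\right\rfloor + \left\lfloor\frac{2014}{25}\right\rfloor + \left\lfloor\frac{2014}{125}\right\rfloor + \left\lfloor\frac{2014}{625}\right\rfloor = 402 + 80 + 16 + 3 = 501,$ while the power of $2$ is larger, giving $501$ trailing zeroes.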
http://clay6.com/qa/57057/let-x-1-y-1-be-the-point-of-intersection-of-the-tangent-to-the-parabola-y-2
# Let $(x_1,y_1)$ be the point of intersection of the tangents to the parabola $y^2=4ax$ at the points $t_1$ and $t_2$. Then $x_1=at_1t_2$ and $y_1=a(t_1+t_2)$
Prove that if $(x_1,y_1)$ is the point of intersection of the tangents to the parabola $y^2=4ax$ at the points $t_1$ and $t_2$, then $x_1=at_1t_2$ and $y_1=a(t_1+t_2)$.
Let the equation of the two tangents at the points $A(t_1)$ and $B(t_2)$ be $yt_1=x+at_1^2$ and $yt_2=x+at_2^2$
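(For reference, these tangent equations follow by implicit differentiation: the point with parameter $t$ on $y^2=4ax$ is $(at^2, 2at)$, and
$2y\,\frac{dy}{dx}=4a \;\Rightarrow\; \left.\frac{dy}{dx}\right|_{(at^2,\,2at)}=\frac{1}{t}, \qquad y-2at=\frac{1}{t}(x-at^2) \;\Rightarrow\; ty=x+at^2.$)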
Subtracting the two equations gives $y(t_1-t_2)=a(t_1^2-t_2^2)=a(t_1-t_2)(t_1+t_2)$.
Hence $y=a(t_1+t_2)$. Substituting into the first tangent equation gives $a(t_1+t_2)t_1=x+at_1^2$.
Therefore $at_1^2+at_1t_2=x+at_1^2$,
and so $x=at_1t_2$.