source | task_type | in_source_id | prompt | gold_standard_solution | verification_info | metadata | problem_id |
---|---|---|---|---|---|---|---|
stackexchange | llm_judgeable_groundtruth_similarity | 8000903 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've taken a look at the list of surveys taken on scala-lang.org and noticed a curious question: " Can you name all the uses of “_”? ". Can you? If yes, please do so here. Explanatory examples are appreciated.
Now provide the response and nothing else.
| The ones I can think of are:

- Existential types: `def foo(l: List[Option[_]]) = ...`
- Higher-kinded type parameters: `case class A[K[_],T](a: K[T])`
- Ignored variables: `val _ = 5`
- Ignored parameters: `List(1, 2, 3) foreach { _ => println("Hi") }`
- Ignored names of self types: `trait MySeq { _: Seq[_] => }`
- Wildcard patterns: `Some(5) match { case Some(_) => println("Yes") }`
- Wildcard patterns in interpolations: `"abc" match { case s"a$_c" => }`
- Sequence wildcard in patterns: `C(1, 2, 3) match { case C(vs @ _*) => vs.foreach(f(_)) }`
- Wildcard imports: `import java.util._`
- Hiding imports: `import java.util.{ArrayList => _, _}`
- Joining letters to operators: `def bang_!(x: Int) = 5`
- Assignment operators: `def foo_=(x: Int) { ... }`
- Placeholder syntax: `List(1, 2, 3) map (_ + 2)`
- Method values: `List(1, 2, 3) foreach println _`
- Converting call-by-name parameters to functions: `def toFunction(callByName: => Int): () => Int = callByName _`
- Default initializer: `var x: String = _` (unloved syntax may be eliminated)

There may be others I have forgotten!

Example showing why `foo(_)` and `foo _` are different. This example comes from 0__:

```scala
trait PlaceholderExample {
  def process[A](f: A => Unit)

  val set: Set[_ => Unit]

  set.foreach(process _)   // Error
  set.foreach(process(_))  // No Error
}
```

In the first case, `process _` represents a method; Scala takes the polymorphic method and attempts to make it monomorphic by filling in the type parameter, but realizes that there is no type that can be filled in for `A` that will give the type `(_ => Unit) => ?` (existential `_` is not a type). In the second case, `process(_)` is a lambda; when writing a lambda with no explicit argument type, Scala infers the type from the argument that `foreach` expects, and `_ => Unit` is a type (whereas plain `_` isn't), so it can be substituted and inferred. This may well be the trickiest gotcha in Scala I have ever encountered. Note that this example compiles in 2.13. Ignore it like it was assigned to underscore. 
| {} | {'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/8000903', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/274627/']} | jdg_74341 |
stackexchange | llm_judgeable_groundtruth_similarity | 45424110 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Trying to upgrade my Android project to CircleCI 2.0 . Everything is fine, but having trouble of running Android UI tests with emulator. Log says com.android.builder.testing.api.DeviceException: No connected devices! I've actually downloaded an emulator image and set it up (coz CircleCI's default Android Docker image doesn't come with emulator): sdkmanager "system-images;android-22;default;armeabi-v7a" && echo "no" | avdmanager create avd -n test -k "system-images;android-22;default;armeabi-v7a" && (emulator64-arm -avd test -noaudio -no-boot-anim -no-window -accel on) & adb devices returns no device :(
Now provide the response and nothing else.
| Example circle.yml:

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/android:api-26-alpha
    steps:
      - run:
          name: Setup emulator
          command: sdkmanager "system-images;android-22;default;armeabi-v7a" && echo "no" | avdmanager create avd -n test -k "system-images;android-22;default;armeabi-v7a"
      - run:
          name: Launch emulator
          command: export LD_LIBRARY_PATH=${ANDROID_HOME}/emulator/lib64:${ANDROID_HOME}/emulator/lib64/qt/lib && emulator64-arm -avd test -noaudio -no-boot-anim -no-window -accel on
          background: true
      - run:
          name: Run Tests
          command: ./gradlew :demo:connectedAndroidTest
```

Note: certain architecture images are not provided by Google; e.g. API level 26 doesn't have an ARM EABI v7a system image, which is why I chose `system-images;android-22;default;armeabi-v7a` above. To see which images are available, run the command `sdkmanager --list --verbose | grep system-images`.

You need to set the environment variable `LD_LIBRARY_PATH` with the `lib64` and `qt` paths; otherwise you'll probably encounter `ERROR: Could not load OpenGLES emulation library [lib64OpenglRender]` or `error while loading shared libraries: libQt5Widgets.so.5: cannot open shared object file: No such file or directory` followed by `Exited with code 127`. This is due to a bug in the Android SDK.

To run a command in the background on CircleCI, it's not like the usual way of just appending `&` to the end of the command; that process will eventually be killed by the hangup (HUP) signal. The correct way is to say `background: true`. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45424110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1083611/']} | jdg_74342 |
stackexchange | llm_judgeable_groundtruth_similarity | 10262920 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am not sure I understand the concept of Python's call-by-object style of passing function arguments (explained here: http://effbot.org/zone/call-by-object.htm ). There don't seem to be enough examples to clarify this concept well (or my google-fu is probably weak! :D) I wrote this little contrived Python program to try to understand this concept:

```python
def foo(itnumber, ittuple, itlist, itdict):
    itnumber += 1
    print id(itnumber), itnumber

    print id(ittuple), ittuple

    itlist.append(3.4)
    print id(itlist), itlist

    itdict['mary'] = 2.3
    print id(itdict), itdict

# Initialize a number, a tuple, a list and a dictionary
tnumber = 1
print id(tnumber), tnumber

ttuple = (1, 2, 3)
print id(ttuple), ttuple

tlist = [1, 2, 3]
print id(tlist), tlist

tdict = tel = {'jack': 4098, 'sape': 4139}
print id(tdict), tdict
print '-------'

# Invoke a function and test it
foo(tnumber, ttuple, tlist, tdict)
print '-------'

# Test behaviour after the function call is over
print id(tnumber), tnumber
print id(ttuple), ttuple
print id(tlist), tlist
print id(tdict), tdict
```

The output of the program is:

```
146739376 1
3075201660 (1, 2, 3)
3075103916 [1, 2, 3]
3075193004 {'sape': 4139, 'jack': 4098}
-------
146739364 2
3075201660 (1, 2, 3)
3075103916 [1, 2, 3, 3.4]
3075193004 {'sape': 4139, 'jack': 4098, 'mary': 2.3}
-------
146739376 1
3075201660 (1, 2, 3)
3075103916 [1, 2, 3, 3.4]
3075193004 {'sape': 4139, 'jack': 4098, 'mary': 2.3}
```

As you can see, except for the integer that was passed, the object ids (which, as I understand, refer to memory locations) remain unchanged. So in the case of the integer, it was (effectively) passed by value and the other data structures were (effectively) passed by reference. I tried changing the list, the number and the dictionary just to test if the data structures were changed in place. The number was not, but the list and the dictionary were.

I use the word effectively above, since the 'call-by-object' style of argument passing seems to behave both ways depending on the data structure passed in the above code. For more complicated data structures (say numpy arrays, etc.), is there any quick rule of thumb to recognize which arguments will be passed by reference and which ones passed by value?
Now provide the response and nothing else.
| The key difference is that in a C-style language, a variable is a box in memory in which you put stuff. In Python, a variable is a name. Python is neither call-by-reference nor call-by-value. It's something much more sensible! (In fact, I learned Python before I learned the more common languages, so call-by-value and call-by-reference seem very strange to me.) In Python, there are *things* and there are *names*. Lists, integers, strings, and custom objects are all things. `x`, `y`, and `z` are names. Writing

```python
x = []
```

means "construct a new thing `[]` and give it the name `x`". Writing

```python
x = []
foo = lambda x: x.append(None)
foo(x)
```

means "construct a new thing `[]` with name `x`, construct a new function (which is another thing) with name `foo`, and call `foo` on the thing with name `x`". Now `foo` just appends `None` to whatever it received, so this reduces to "append `None` to the empty list". Writing

```python
x = 0
def foo(x):
    x += 1
foo(x)
```

means "construct a new thing `0` with name `x`, construct a new function `foo`, and call `foo` on `x`". Inside `foo`, the assignment just says "rename `x` to 1 plus what it used to be", but that doesn't change the thing `0`. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10262920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/505306/']} | jdg_74343 |
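The names-versus-things model in the answer above can be demonstrated in modern Python 3. This is a small illustrative sketch added for clarity; the function names `rebind` and `mutate` are mine, not from the original post:

```python
def rebind(x):
    # Binds the *local* name x to a new thing; the caller's binding is untouched.
    x = x + 1

def mutate(lst):
    # Changes the thing itself; every name bound to that thing sees the change.
    lst.append(99)

n = 1
rebind(n)
print(n)          # 1 -- the caller's name n still refers to the thing 1

items = [1, 2, 3]
mutate(items)
print(items)      # [1, 2, 3, 99] -- the list (the thing) was mutated in place

# Two names, one thing -- until one name is rebound.
a = [1]
b = a
print(a is b)     # True: same thing, two names
b = [1]
print(a is b)     # False: b now names a different (merely equal) thing
```

The same rule covers the numpy case asked about in the question: every argument is passed the same way (the name is bound to the thing), and the only question is whether the function rebinds the name or mutates the thing.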
stackexchange | llm_judgeable_groundtruth_similarity | 65534 |
Below is a question asked on the forum cs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
So I'm watching an Algorithms course in Coursera, and we are currently discussing hash tables. He's talking about the importance of a good hash function, and about how an ideal hash function would be a "super clever hash function guaranteed to spread every data set evenly". Then, he explains that the problem is that such a hash function does not exist (and that for every hash function there is a pathological data set), and that the reason for this is as follows: Fix a hash function $h: U \to \{0, 1, 2, ..., n-1\}$. By the Pigeonhole Principle, there exists a bucket $i$ such that at least $|U|/n$ elements of $U$ hash to $i$ under $h$. If a data set draws only from these, everything collides. The bolded part is what's confusing me. Why does there exist a bucket $i$ such that at least $|U|/n$ elements of $U$ hash to $i$ under $h$? I can't really visualize what he means.
Now provide the response and nothing else.
| An easy way to visualize this is to imagine a hash table of size $n$ (implemented with chaining) that contains all of the elements of $U$ (even though this is unrealistic in practice because $U$ typically has massive size). Since $|U| >> n$, all of the elements of $U$ do not fit into the hash table; therefore, there will be collisions. Consider, for example, the universal set $U=\{a,b,c,d,e,f,g\}$ and a hash table with $n=3$ buckets. Since $|U|=7$, at least one bucket must necessarily contain $\lceil \: |U| \: / \: n \rceil = \lceil 7/3 \rceil = 3$ or more elements. In the case of the most clever hash function (which would spread out the elements of $U$ as evenly as possible), this bucket would contain exactly $3$ elements, like this (highlighted in red): It is important to see that no matter how clever the hash function is, there will always exist a data set (for example, the set $\{b,g,a\}$) whose elements hash to the same bucket (for example, bucket number $1$). Such a pathological data set will make your hash table degenerate to its worst-case linear-time performance. | {} | {'log_upvote_score': 4, 'links': ['https://cs.stackexchange.com/questions/65534', 'https://cs.stackexchange.com', 'https://cs.stackexchange.com/users/17133/']} | jdg_74344 |
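The pigeonhole bound in the answer can also be checked mechanically. Below is a small Python sketch (my own illustration, not from the course): for a toy universe of $|U| = 7$ keys and $n = 3$ buckets, every hash function we try, however "clever", loads some bucket with at least $\lceil 7/3 \rceil = 3$ keys.

```python
import math

# Toy universe of 7 keys and a table with 3 buckets, as in the example above.
U = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
n = 3

def fullest_bucket(h):
    """Return (bucket_index, keys) for the most-loaded bucket under hash function h."""
    buckets = {i: [] for i in range(n)}
    for key in U:
        buckets[h(key)].append(key)
    return max(buckets.items(), key=lambda kv: len(kv[1]))

# The toy hash functions below are arbitrary illustrations.
for h in (lambda k: 0,                       # degenerate: everything collides
          lambda k: ord(k) % n,              # typical modular hash
          lambda k: (17 * ord(k) + 5) % n):  # another arbitrary choice
    bucket, keys = fullest_bucket(h)
    # The pigeonhole principle guarantees at least ceil(|U| / n) keys here.
    assert len(keys) >= math.ceil(len(U) / n)
    print(bucket, keys)
```

A data set drawn only from the keys in the fullest bucket is exactly the "pathological data set" the lecture describes.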
stackexchange | llm_judgeable_groundtruth_similarity | 4239714 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently converted a project from WPF 3.5 to WPF 4.0. Functionally, everything works, but the DataGrid style I was applying on top of the Aero theme has suddenly stopped working. As you can see from the before/after pictures below, my DataGrids went from having an Aero look plus bold headings, extra padding, and alternating row formats to just looking plain "Aero". Besides removing all references to the WPF Toolkit (since the DataGrid is now native to WPF 4.0), I really didn't change anything about my code/markup.

Before (WPF Toolkit DataGrid)

After (.NET 4.0 DataGrid)

As I learned in an earlier question, I am able to get the custom DataGrid styling to work again if I stop referencing the Aero resource dictionary, but then everything looks "Luna" on Windows XP (which is not what I want). So, how do I ensure that my app always uses the Aero theme, but still apply styling on top of that theme in WPF 4.0?

Here is my App.xaml code:

```xml
<Application x:Class="TempProj.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Application.Resources>
    <ResourceDictionary>
      <ResourceDictionary.MergedDictionaries>
        <ResourceDictionary Source="/PresentationFramework.Aero, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, ProcessorArchitecture=MSIL;component/themes/aero.normalcolor.xaml" />
        <ResourceDictionary Source="/CommonLibraryWpf;component/ResourceDictionaries/DataGridResourceDictionary.xaml" />
      </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
  </Application.Resources>
</Application>
```

Here is my DataGridResourceDictionary.xaml code:

```xml
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <Style x:Key="DataGrid_ColumnHeaderStyle" TargetType="DataGridColumnHeader">
    <Setter Property="FontWeight" Value="Bold" />
    <Setter Property="TextBlock.TextAlignment" Value="Center" />
    <Setter Property="TextBlock.TextWrapping" Value="WrapWithOverflow" />
  </Style>
  <Style x:Key="DataGrid_CellStyle" TargetType="DataGridCell">
    <Setter Property="Padding" Value="5,5,5,5" />
    <Setter Property="TextBlock.TextAlignment" Value="Center" />
    <Setter Property="Template">
      <Setter.Value>
        <ControlTemplate TargetType="DataGridCell">
          <Border Padding="{TemplateBinding Padding}" Background="{TemplateBinding Background}">
            <ContentPresenter />
          </Border>
        </ControlTemplate>
      </Setter.Value>
    </Setter>
  </Style>
  <Style TargetType="DataGrid">
    <Setter Property="ColumnHeaderStyle" Value="{StaticResource DataGrid_ColumnHeaderStyle}" />
    <Setter Property="CellStyle" Value="{StaticResource DataGrid_CellStyle}" />
    <Setter Property="Background" Value="White" />
    <Setter Property="AlternatingRowBackground" Value="#F0F0F0" />
    <Setter Property="VerticalGridLinesBrush" Value="LightGray" />
    <Setter Property="HeadersVisibility" Value="Column" />
    <Setter Property="SelectionMode" Value="Single" />
    <Setter Property="SelectionUnit" Value="FullRow" />
    <Setter Property="GridLinesVisibility" Value="Vertical" />
    <Setter Property="AutoGenerateColumns" Value="False" />
    <Setter Property="CanUserAddRows" Value="False" />
    <Setter Property="CanUserDeleteRows" Value="False" />
    <Setter Property="CanUserReorderColumns" Value="True" />
    <Setter Property="CanUserResizeColumns" Value="True" />
    <Setter Property="CanUserResizeRows" Value="False" />
    <Setter Property="CanUserSortColumns" Value="True" />
    <Setter Property="IsReadOnly" Value="True" />
    <Setter Property="BorderBrush" Value="#DDDDDD" />
    <Setter Property="HorizontalGridLinesBrush" Value="#DDDDDD" />
    <Setter Property="VerticalGridLinesBrush" Value="#DDDDDD" />
  </Style>
  <Style x:Key="DataGrid_FixedStyle" TargetType="DataGrid" BasedOn="{StaticResource {x:Type DataGrid}}">
    <Setter Property="CanUserReorderColumns" Value="False" />
    <Setter Property="CanUserResizeColumns" Value="False" />
    <Setter Property="CanUserResizeRows" Value="False" />
    <Setter Property="CanUserSortColumns" Value="False" />
  </Style>
</ResourceDictionary>
```

Here's a usage sample:

```xml
<DataGrid Grid.Row="0" Grid.Column="0" Style="{StaticResource DataGrid_FixedStyle}" ItemsSource="{Binding Coordinates}">
  <DataGrid.Columns>
    <DataGridTextColumn Binding="{Binding X}" Header="X" />
    <DataGridTextColumn Binding="{Binding Y}" Header="Y" />
    <DataGridTextColumn Binding="{Binding Z}" Header="Z" />
  </DataGrid.Columns>
</DataGrid>
```

Edit: It just occurred to me that maybe the problem is that I'm referencing the wrong version of the Aero framework. Here's what I have now:

```xml
<ResourceDictionary Source="/PresentationFramework.Aero, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, ProcessorArchitecture=MSIL;component/themes/aero.normalcolor.xaml" />
```

Should this be updated to version 4.0? What is the PublicKeyToken for version 4 (or how do I figure this out)?
Now provide the response and nothing else.
| **Relatively Short Answer**

Loading a theme's resources is not the same as changing the theme at the OS level. Loading a theme's resources may cause adverse effects. From WPF's point of view, a large number of implicit Styles are now present in the application. These Styles may trump other Styles. The bottom line is that treating a theme like an application skin may not work without refinements.

There are some alternative ways of simulating a theme change:

- This answer to a similar question lists some ideas.
- The final point in this knowledge base article uses a small dose of reflection and must be used before the application is loaded.
- This codeplex project uses a large dose of reflection and can be used at any time.

This problem showcases some fairly complex WPF functionality, and a portion of it appears to be undocumented. However, it does not appear to be a bug. If it's not a bug - that is, if all of it is intentional WPF behavior - you might well argue the WPF DataGrid is poorly designed in a few areas.

Meleak's answer was very much on the right track. However, the problem is solvable and it can be solved without compromising your design or requiring repetitive Style setting. And perhaps more importantly, the problem is debuggable. The following XAML works. I left the old XAML commented out just to make the changes more visible. For a more in-depth look at the problem, please see the long answer.
DataGridResourceDictionary.xaml:

```xml
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

  <!-- <Style x:Key="DataGrid_ColumnHeaderStyle" TargetType="DataGridColumnHeader"> -->
  <Style TargetType="DataGridColumnHeader" BasedOn="{StaticResource {x:Type DataGridColumnHeader}}">
    <!--New-->
    <Setter Property="HorizontalContentAlignment" Value="Stretch"/>
    <!---->
    <Setter Property="FontWeight" Value="Bold" />
    <Setter Property="TextBlock.TextAlignment" Value="Center" />
    <Setter Property="TextBlock.TextWrapping" Value="WrapWithOverflow" />
  </Style>

  <!-- <Style x:Key="DataGrid_CellStyle" TargetType="DataGridCell"> -->
  <Style TargetType="DataGridCell" BasedOn="{StaticResource {x:Type DataGridCell}}">
    <Setter Property="Padding" Value="5,5,5,5" />
    <Setter Property="TextBlock.TextAlignment" Value="Center" />
    <Setter Property="Template">
      <Setter.Value>
        <!--
        <ControlTemplate TargetType="DataGridCell">
          <Border Padding="{TemplateBinding Padding}" Background="{TemplateBinding Background}">
            <ContentPresenter />
          </Border>
        </ControlTemplate>
        -->
        <ControlTemplate TargetType="{x:Type DataGridCell}">
          <Border Padding="{TemplateBinding Padding}" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" SnapsToDevicePixels="True">
            <ContentPresenter SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}"/>
          </Border>
        </ControlTemplate>
      </Setter.Value>
    </Setter>

    <!--Additional Feature-->
    <!--
    Remove keyboard focus cues on cells and tabbing on cells when only rows are
    selectable and the DataGrid is readonly.

    Note that having some kind of keyboard focus cue is typically desirable. For
    example, the lack of any keyboard focus cues could be confusing if an
    application has multiple controls and each control is showing something
    selected, yet there is no keyboard focus cue. It's not necessarily obvious
    what would happen if Control+C or Tab is pressed.

    So, when only rows are selectable and the DataGrid is readonly, it would be
    ideal to make cells not focusable at all, make the entire row focusable, and
    make sure the row has a focus cue. It would take much more investigation to
    implement this.
    -->
    <Style.Triggers>
      <MultiDataTrigger>
        <MultiDataTrigger.Conditions>
          <Condition Binding="{Binding RelativeSource={RelativeSource AncestorType=DataGrid}, Path=SelectionUnit}" Value="FullRow"/>
          <Condition Binding="{Binding RelativeSource={RelativeSource AncestorType=DataGrid}, Path=IsReadOnly}" Value="True"/>
        </MultiDataTrigger.Conditions>
        <Setter Property="BorderBrush" Value="{Binding RelativeSource={RelativeSource Mode=Self}, Path=Background}" />
        <Setter Property="FocusVisualStyle" Value="{x:Null}" />
        <Setter Property="IsTabStop" Value="False" />
      </MultiDataTrigger>
    </Style.Triggers>
    <!---->
  </Style>

  <!-- <Style TargetType="DataGrid"> -->
  <Style TargetType="DataGrid" BasedOn="{StaticResource {x:Type DataGrid}}">
    <!--Unworkable Design-->
    <!--
    <Setter Property="ColumnHeaderStyle" Value="{StaticResource DataGrid_ColumnHeaderStyle}" />
    <Setter Property="CellStyle" Value="{StaticResource DataGrid_CellStyle}" />
    -->
    <Setter Property="Background" Value="White" />
    <Setter Property="AlternatingRowBackground" Value="#F0F0F0" />
    <!--This was a duplicate of the final PropertySetter.-->
    <!-- <Setter Property="VerticalGridLinesBrush" Value="LightGray" /> -->
    <Setter Property="HeadersVisibility" Value="Column" />
    <Setter Property="SelectionMode" Value="Single" />
    <Setter Property="SelectionUnit" Value="FullRow" />
    <Setter Property="GridLinesVisibility" Value="Vertical" />
    <Setter Property="AutoGenerateColumns" Value="False" />
    <Setter Property="CanUserAddRows" Value="False" />
    <Setter Property="CanUserDeleteRows" Value="False" />
    <Setter Property="CanUserReorderColumns" Value="True" />
    <Setter Property="CanUserResizeColumns" Value="True" />
    <Setter Property="CanUserResizeRows" Value="False" />
    <Setter Property="CanUserSortColumns" Value="True" />
    <Setter Property="IsReadOnly" Value="True" />
    <Setter Property="BorderBrush" Value="#DDDDDD" />
    <Setter Property="HorizontalGridLinesBrush" Value="#DDDDDD" />
    <Setter Property="VerticalGridLinesBrush" Value="#DDDDDD" />
  </Style>

  <Style x:Key="DataGrid_FixedStyle" TargetType="DataGrid" BasedOn="{StaticResource {x:Type DataGrid}}">
    <Setter Property="CanUserReorderColumns" Value="False" />
    <Setter Property="CanUserResizeColumns" Value="False" />
    <Setter Property="CanUserResizeRows" Value="False" />
    <Setter Property="CanUserSortColumns" Value="False" />
  </Style>
</ResourceDictionary>
```

App.xaml:

```xml
<Application x:Class="TempProj.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             StartupUri="MainWindow.xaml">
  <Application.Resources>
    <ResourceDictionary>
      <ResourceDictionary.MergedDictionaries>
        <!-- <ResourceDictionary Source="/PresentationFramework.Aero, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, ProcessorArchitecture=MSIL;component/themes/aero.normalcolor.xaml" /> -->
        <ResourceDictionary Source="/PresentationFramework.Aero, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, ProcessorArchitecture=MSIL;component/themes/aero.normalcolor.xaml" />

        <!--New-->
        <!--
        This is a modified replica of the DataGridRow Style in the Aero skin
        that's evaluated next. We are hiding that Style and replacing it with this.
        -->
        <ResourceDictionary>
          <Style x:Key="{x:Type DataGridRow}" TargetType="{x:Type DataGridRow}">
            <!--
            DataGridRow.Background must not be set in this application.
            DataGridRow.Background must only be set in the theme. If it is set in
            the application, DataGrid.AlternatingRowBackground will not function
            properly.
            See: https://stackoverflow.com/questions/4239714/why-cant-i-style-a-control-with-the-aero-theme-applied-in-wpf-4-0
            The removal of this Setter is the only modification we have made.
            -->
            <!-- <Setter Property="Background" Value="{DynamicResource {x:Static SystemColors.WindowBrushKey}}" /> -->
            <Setter Property="SnapsToDevicePixels" Value="true"/>
            <Setter Property="Validation.ErrorTemplate" Value="{x:Null}" />
            <Setter Property="ValidationErrorTemplate">
              <Setter.Value>
                <ControlTemplate>
                  <TextBlock Margin="2,0,0,0" VerticalAlignment="Center" Foreground="Red" Text="!" />
                </ControlTemplate>
              </Setter.Value>
            </Setter>
            <Setter Property="Template">
              <Setter.Value>
                <ControlTemplate TargetType="{x:Type DataGridRow}">
                  <Border x:Name="DGR_Border" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" SnapsToDevicePixels="True">
                    <SelectiveScrollingGrid>
                      <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="*"/>
                      </Grid.ColumnDefinitions>
                      <Grid.RowDefinitions>
                        <RowDefinition Height="*"/>
                        <RowDefinition Height="Auto"/>
                      </Grid.RowDefinitions>
                      <DataGridCellsPresenter Grid.Column="1" ItemsPanel="{TemplateBinding ItemsPanel}" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}"/>
                      <DataGridDetailsPresenter SelectiveScrollingGrid.SelectiveScrollingOrientation="{Binding RelativeSource={RelativeSource AncestorType={x:Type DataGrid}}, Path=AreRowDetailsFrozen, Converter={x:Static DataGrid.RowDetailsScrollingConverter}, ConverterParameter={x:Static SelectiveScrollingOrientation.Vertical}}" Grid.Column="1" Grid.Row="1" Visibility="{TemplateBinding DetailsVisibility}" />
                      <DataGridRowHeader SelectiveScrollingGrid.SelectiveScrollingOrientation="Vertical" Grid.RowSpan="2" Visibility="{Binding RelativeSource={RelativeSource AncestorType={x:Type DataGrid}}, Path=HeadersVisibility, Converter={x:Static DataGrid.HeadersVisibilityConverter}, ConverterParameter={x:Static DataGridHeadersVisibility.Row}}"/>
                    </SelectiveScrollingGrid>
                  </Border>
                </ControlTemplate>
              </Setter.Value>
            </Setter>
          </Style>
        </ResourceDictionary>
        <!---->

        <ResourceDictionary Source="/CommonLibraryWpf;component/ResourceDictionaries/DataGridResourceDictionary.xaml" />
      </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
  </Application.Resources>
</Application>
```

MainWindow.xaml:

```xml
<Window x:Class="TempProj.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
  <Window.Resources>
    <Vector3DCollection x:Key="Coordinates">
      <Vector3D X="1" Y="0" Z="0"/>
      <Vector3D X="0" Y="22" Z="0"/>
      <Vector3D X="0" Y="0" Z="333"/>
      <Vector3D X="0" Y="4444" Z="0"/>
      <Vector3D X="55555" Y="0" Z="0"/>
    </Vector3DCollection>
  </Window.Resources>
  <Grid>
    <DataGrid Grid.Row="0" Grid.Column="0" Style="{StaticResource DataGrid_FixedStyle}" ItemsSource="{StaticResource Coordinates}">
      <DataGrid.Columns>
        <DataGridTextColumn Binding="{Binding X}" Header="X" />
        <DataGridTextColumn Binding="{Binding Y}" Header="Y" />
        <DataGridTextColumn Binding="{Binding Z}" Header="Z" />
      </DataGrid.Columns>
    </DataGrid>
  </Grid>
</Window>
``` | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4239714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/129164/']} | jdg_74345 |
stackexchange | llm_judgeable_groundtruth_similarity | 39081232 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have created an ASP.NET Web API and applied the `Authorize` attribute to the API controller. Now, I want to test it using Postman but I am getting an authorization error. The controller code is:

```csharp
[Authorize]
[HttpPost]
public IHttpActionResult Attend([FromBody] int gigId)
{
    var attendance = new Attdendance
    {
        GigId = gigId,
        AttendeeId = User.Identity.GetUserId()
    };

    _context.Attdendances.Add(attendance);
    _context.SaveChanges();

    return Ok();
}
```

My request looks like this: http://prntscr.com/c8wz0b

I am using this advanced Postman rest client: http://prntscr.com/c8xafd

How do I pass authorization in Postman?
Now provide the response and nothing else.
| EDIT 23/08/2016

I presume you are using cookie authentication with Identity:

```csharp
// Enable the application to use a cookie to store information for the signed in user
// and to use a cookie to temporarily store information about a user logging in with a third party login provider
// Configure the sign in cookie
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
    LoginPath = new PathString("/Account/Login"),
    Provider = new CookieAuthenticationProvider
    {
        // Enables the application to validate the security stamp when the user logs in.
        // This is a security feature which is used when you change a password or add an external login to your account.
        OnValidateIdentity = SecurityStampValidator.OnValidateIdentity<ApplicationUserManager, ApplicationUser>(
            validateInterval: TimeSpan.FromMinutes(30),
            regenerateIdentity: (manager, user) => user.GenerateUserIdentityAsync(manager))
    }
});
```

This is the default configuration with Identity in Visual Studio. I can argue why it is not a good option for security, but that's not the point. You can go with it in Postman, but it's tricky. This is how I do it:

1. Make a request to your login page.
2. Get the anti-forgery token from the form.
3. Make a POST request to the login page with those post params in the data form.

Now Postman gets the authentication cookie and you can request the Web API behind the `[Authorize]` tag.

EDIT

For the tool, you have to add an authorization header:

1. Go to the Headers form.
2. Add the HTTP header "authorization".
3. Click on the edit button, et voilà ;) (screenshot)

Previous answer deleted | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39081232', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1125955/']} | jdg_74346 |
stackexchange | llm_judgeable_groundtruth_similarity | 90903 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am looking at Wikipedia's article on deriving the Schwarzschild solution . In the section "Simplifying the components", it says, On the hypersurfaces of constant $t$ and constant $r$ , it is required that the metric be that of a 2-sphere: $$dl^2=r^2(d\theta^2+\sin^2\theta d\phi^2)$$ My question is why does the metric have to be this particular 2-sphere with a coefficient of $r^2$ ? We are not necessarily dealing with Euclidean space here.
Now provide the response and nothing else.
| If you want, you can go and use the ansatz:

$$ds^{2} = -A(r) dt^{2} + B(r) dr^{2} + 2C(r)\,dt\,dr + f(r)\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right)$$

Where the functions only depend on $r$ due to the fact that $t$ generates a symmetry of the spacetime -- you are assuming a static spacetime. Note, however, that you are free to arbitrarily rescale $r$. Well, if you choose $R = \sqrt{f(r)}$, then this is rewritten in the form

$$ds^{2} = -A'(R) dt^{2} + {\hat B}(R) dR^{2} + 2C'(R)\,dt\,dR + R^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right)$$

Where the prime denotes that you have to feed $f^{-1}(R^{2})$ into the function, and ${\hat B} = B'(R)\left(\frac{dr}{dR}\right)^{2}$. To get rid of the $C$ term, you can make the definition $t = T + \int\,dR\,C'$, which gives

$$ds^{2} = -(A'(R)-C'(R))dT^{2} + dR^{2}\left({\hat B}(R)+(1-A(R))(C'(R))^{2}\right) + R^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right)$$

So, we can just redefine our factors, and we get the "standard" starting form of

$$ds^{2} = - \alpha(R)dT^{2} + \beta(R)dR^{2} + R^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right)$$

without losing any generality. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/90903', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/7870/']} | jdg_74347 |
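The change of variables can be spelled out term by term. This derivation sketch is an addition for clarity, not part of the original answer; here $d\Omega^{2} \equiv d\theta^{2} + \sin^{2}\theta\, d\phi^{2}$:

```latex
% With R = \sqrt{f(r)}:
dR = \frac{f'(r)}{2\sqrt{f(r)}}\,dr
\quad\Longrightarrow\quad
dr^{2} = \left(\frac{dr}{dR}\right)^{2} dR^{2} = \frac{4 f(r)}{f'(r)^{2}}\,dR^{2},
% so the angular and radial terms of the ansatz become
f(r)\,d\Omega^{2} = R^{2}\,d\Omega^{2},
\qquad
B(r)\,dr^{2}
  = \underbrace{B\!\left(f^{-1}(R^{2})\right)\,
    \frac{4R^{2}}{f'\!\left(f^{-1}(R^{2})\right)^{2}}}_{\hat{B}(R)}\;dR^{2}.
```

This makes explicit why the coefficient of the 2-sphere part can always be taken to be $R^{2}$: the Jacobian of the rescaling is simply absorbed into the remaining free functions.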
stackexchange | llm_judgeable_groundtruth_similarity | 27021 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
How do I name one random file in the deepest level of a directory tree using basic Bash commands and script? I was searching for a way to do it as an one-liner and without functions. Assuming also that I'm starting in the current directory and then work towards the depth. find , grep , sed and awk are acceptable commands. My current attempt looks like this and does not work: find -type d | declare COUNT=-1; declare P=""; while read LINE ; do echo $LINE; declare C; C=$(echo $LINE | cut -c3- | sed "s/\//\n\//g" | grep '/' -c); echo $C; if [ $COUNT -gt $C ]; then let COUNT=$C; let P=$LINE; echo "Done"; fi; done This would only find the directory. How could this be solved in the most simple way?
Now provide the response and nothing else.
That's an odd request! I'd use find + awk to grab a file in the deepest directory:

bash-3.2$ deepest=$(find / -type f | awk -F'/' 'NF > depth {
>   depth = NF;
>   deepest = $0;
> }
>
> END {
>   print deepest;
> }')

Using ${deepest} in your mv command is left as an exercise, but the following five lines may help you further:

bash-3.2$ echo "${deepest}"
/Developer/SDKs/MacOSX10.6.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/vendor/tzinfo-0.3.12/tzinfo/definitions/America/Argentina/Buenos_Aires.rb
bash-3.2$ echo "${deepest%.*}"
/Developer/SDKs/MacOSX10.6.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/vendor/tzinfo-0.3.12/tzinfo/definitions/America/Argentina/Buenos_Aires
bash-3.2$ echo "${deepest%/*}"
/Developer/SDKs/MacOSX10.6.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/vendor/tzinfo-0.3.12/tzinfo/definitions/America/Argentina
bash-3.2$ echo "${deepest##*/}"
Buenos_Aires.rb
bash-3.2$ echo "${deepest##*.}"
rb

Following the update to the question — "find -type d [...] This would only find the directory. [...] How could this be solved in the most simple way?" — solve it by supplying -type f to find, to find all files (f), not all directories (d). | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/27021', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/13430/']} | jdg_74348
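The same deepest-path logic translates to other languages. Purely as an illustrative cross-check of the find/awk approach above (the helper name `deepest_file` and the sample tree are invented), here is a Python sketch that counts path separators the way the awk script counts fields:

```python
import os
import tempfile

def deepest_file(root):
    """Return a file path with the most components under root
    (an arbitrary one among ties), or None if there are no files."""
    best, best_depth = None, -1
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            depth = path.count(os.sep)  # same idea as awk's NF with -F'/'
            if depth > best_depth:
                best, best_depth = path, depth
    return best

# Build a tiny tree to try it on.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b", "c"))
open(os.path.join(root, "shallow.txt"), "w").close()
open(os.path.join(root, "a", "b", "c", "deep.txt"), "w").close()
print(deepest_file(root))  # ends with .../a/b/c/deep.txt
```

Like the awk version, this keeps only one winner; collecting all files at the maximum depth would need a second pass or a list of candidates.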
stackexchange | llm_judgeable_groundtruth_similarity | 6354303 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm a newbie in PHP, especially at making pagination. My question is: how do I make paging like Stack Overflow's pagination? I mean paging like this: 1 ... 5 6 7 8 9 ... 25 (the first number and the last number always appear, but in the middle only 5 numbers, with the selected page exactly in the middle). In PHP I have tried making paging:

<?php
//Show page links
for($i=1; $i<=$pages; $i++){
    echo '<li id="'.$i.'">'.$i.'</li>';
}
?>

but it will show all of the pages, like 1 2 3 4 5 6 7 8 9 10 etc. Does anybody have a simple logic example to solve this problem? Many thanks :)
Now provide the response and nothing else.
Below is a snippet from a general pagination class 1 I wrote a few years ago. I have edited it to show the relevant parts only.

// cntAround is the number of pages to show before and after the current
function renderNavigation($cntAround = 1) {
    $out = '';
    $isGap = false; // A "gap" is the pages to skip
    $current = // Current page
    $cntPages = // Total number of pages
    for ($i = 0; $i < $pages; $i++) { // Run through pages
        $isGap = false; // Are we at a gap?
        if ($cntAround >= 0 && $i > 0 && $i < $cntPages - 1 && abs($i - $current) > $cntAround) {
            // If beyond "cntAround" and not first or last.
            $isGap = true;
            // Skip to next linked item (or last if we've already run past the current page)
            $i = ($i < $current ? $current - $cntAround : $cntPages - 1) - 1;
        }
        $lnk = ($isGap ? '...' : ($i + 1)); // If gap, write ellipsis, else page number
        if ($i != $current && !$isGap) { // Do not link gaps and current
            $lnk = '<a href="?page=' . ($i + 1) . '">' . $lnk . '</a>';
        }
        $out .= "\t<li>" . $lnk . "</li>\n"; // Wrap in list items
    }
    return "<ul>\n" . $out . '</ul>'; // Wrap in list
}

Example 1: cntAround = 1, current = 5, cntPages = 9: [1] ... [4] 5 [6] ... [9]

Example 2: cntAround = 3, current = 5, cntPages = 11: [1] [2] [3] [4] 5 [6] [7] [8] ... [11]

1) Article is in Danish. Google Translate'd version is here. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6354303', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/672726/']} | jdg_74349
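The windowing idea in the PHP snippet is language-agnostic. As an illustrative aside, here is a minimal Python sketch of the same "1 ... 4 5 6 ... 9" logic (the function name `page_window` is invented, and `None` stands in for the ellipsis); it reproduces both examples from the answer:

```python
def page_window(total, current, around=1):
    """Return the 1-based page numbers to display, with None marking a gap.
    The first and last pages always appear; `around` pages flank `current`."""
    shown = {1, total, current}
    shown.update(range(max(1, current - around), min(total, current + around) + 1))
    out, prev = [], 0
    for p in sorted(shown):
        if p - prev > 1:      # a skipped run of pages collapses into one gap marker
            out.append(None)
        out.append(p)
        prev = p
    return out

print(page_window(9, 5))      # [1, None, 4, 5, 6, None, 9]
print(page_window(11, 5, 3))  # [1, 2, 3, 4, 5, 6, 7, 8, None, 11]
```

Rendering is then a simple loop: emit "..." for `None` and a link (or plain text for the current page) for each number.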
stackexchange | llm_judgeable_groundtruth_similarity | 3704447 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The only thing that I know is that the result should be $\displaystyle 4\pi i$ . Could you give me a hint/suggestion? I thought about using the Residue's theorem, where if $\displaystyle f(z) = \frac{|z|e^z}{z^2}$ , it has a 2nd order pole in $0$ , but after that, I don't know what to do next.
Now provide the response and nothing else.
| We have $$\int_{|z|=2} \frac{|z| e^z}{z^2} dz=2\int_{|z|=2} \frac{e^z}{z^2} dz.\tag{1}$$ Note that, in $\{|z|<2\}$ , $z=0$ is the only pole of $\frac{e^z}{z^2}$ . By Residue theorem , we have $$\int_{|z|=2} \frac{e^z}{z^2} dz=2\pi i Res(\frac{e^z}{z^2}, 0).\tag{2}$$ Note that $$\frac{e^z}{z^2}=\frac{1}{z^2}\left(1+z+\frac{z^2}{2!}+\cdots\right)=\frac{1}{z^2}+\frac{1}{z}+\frac{1}{2!}+\cdots.$$ Hence, $Res(\frac{e^z}{z^2}, 0)=1$ . Combining this with $(1)$ and $(2)$ , we obtain $$\int_{|z|=2} \frac{|z| e^z}{z^2} dz=2\cdot 2\pi i=4\pi i.$$ | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3704447', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/762576/']} | jdg_74350 |
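As a numerical sanity check of the residue calculation above (an aside, not part of the original answer), one can evaluate the contour integral directly with the standard parametrisation $z = 2e^{i\theta}$; the equally spaced sum over a smooth periodic integrand converges very quickly to $4\pi i$:

```python
import math
import cmath

N = 2000
total = 0 + 0j
for k in range(N):
    theta = 2 * math.pi * k / N
    z = 2 * cmath.exp(1j * theta)                # the contour |z| = 2
    f = abs(z) * cmath.exp(z) / z**2             # integrand |z| e^z / z^2
    dz = 2j * cmath.exp(1j * theta) * (2 * math.pi / N)  # dz = 2i e^{i theta} d(theta)
    total += f * dz

print(total)             # approximately 0 + 12.566j
print(4 * math.pi * 1j)  # exact value 4*pi*i
```

The agreement to many digits reflects the spectral accuracy of the trapezoidal rule on periodic analytic integrands.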
stackexchange | llm_judgeable_groundtruth_similarity | 71914 |
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I found information on how a capacitance moisture sensor works, but I couldn't find any information on how the following sensor works. What is the theory behind this sensor ?
Now provide the response and nothing else.
There are multiple steps, but the basic process is that you use a photoresist. At the beginning of a process step, a photoresist is "spun" onto the wafer. It is a very literal thing: they spin the wafer while dripping the polymer onto the surface, which spreads out into a thin layer of precise thickness. This is cured and then placed into a photolithographic machine, which projects an image onto the wafer that leaves latent images in the photoresist (AKA PR). The PR is developed (some resists are negative and some are positive, which means the exposed areas stay or the exposed areas are eliminated). The development process removes the parts of the PR that are to be removed, leaving behind the desired pattern. The PR can define areas that are etched (removed) or windows through which ions are implanted. Implanting is the process through which the Si is doped. Once the area is implanted, the remaining PR is removed and the wafer is thermally treated to anneal the implant damage. In between litho steps are depositions, growths, etches, wet baths, plasma treatments, etc. | {} | {'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/71914', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/4705/']} | jdg_74351
stackexchange | llm_judgeable_groundtruth_similarity | 17367649 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a method that returns data from an EF model. I'm getting the above message, but I can't work out how to circumvent the problem. public static IEnumerable<FundedCount> GetFundedCount() { var today = DateTime.Now; var daysInMonth = DateTime.DaysInMonth(today.Year, today.Month); var day1 = DateTime.Now.AddDays(-1); var day31 = DateTime.Now.AddDays(-31); using (var uow = new UnitOfWork(ConnectionString.PaydayLenders)) { var r = new Repository<MatchHistory>(uow.Context); return r.Find() .Where(x => x.AppliedOn >= day1 && x.AppliedOn <= day31 && x.ResultTypeId == (int)MatchResultType.Accepted) .GroupBy(x => new { x.BuyerId, x.AppliedOn }) .Select(x => new FundedCount( x.Key.BuyerId, x.Count() / 30 * daysInMonth)) .ToList(); } } FundedCount is not an EF entity, MatchHistory is, so I can't understand why it is complaining. All advice appreciated.
Now provide the response and nothing else.
The reason it is complaining is because it doesn't know how to translate your Select() into a SQL expression. If you need to do a data transformation to a POCO that is not an entity, you should first get the relevant data from EF and then transform it to the POCO. In your case it should be as simple as calling ToList() earlier:

return r.Find()
    .Where(x => x.AppliedOn >= day1
        && x.AppliedOn <= day31
        && x.ResultTypeId == (int)MatchResultType.Accepted)
    .GroupBy(x => new { x.BuyerId, x.AppliedOn })
    .ToList() // this causes the query to execute
    .Select(x => new FundedCount(x.Key.BuyerId, x.Count() / 30 * daysInMonth));

Be careful with this, though, and make sure that you're limiting the size of the data set returned by ToList() as much as possible so that you're not trying to load an entire table into memory. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/17367649', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1437135/']} | jdg_74352
stackexchange | llm_judgeable_groundtruth_similarity | 15398861 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Probably a very easy (and dumb) question to other Elasticsearch devs: what's the difference between these two? I'm connecting to a remote Elasticsearch server from a Java webapp. So far I have been using TransportClient, but I was wondering if NodeBuilder can be used too, or whether NodeBuilder should be used just for embedded clients. If either of the two can be used to connect to remote ES servers, which one is better in terms of memory and performance? If anyone can point me to an example of NodeBuilder connecting to a remote ES server, that would be great, because I haven't had any luck finding one. Thanks.
Now provide the response and nothing else.
NodeBuilder can also be used to connect to a cluster.

Node node = nodeBuilder().clusterName("yourcluster").client(true).node();
Client client = node.client();

It will join the cluster as another node and will be aware of the whole topology. Using nodes, you can use multicast to discover other running nodes. My opinion is that I prefer to use TransportClient than NodeClient because other cluster nodes won't receive useless information when the TransportClient stops. When a NodeClient stops, each node has to know that even if they don't have to manage it as it does not hold any data. Also, I have seen in debug mode that NodeClient starts more threads than TransportClient. So I think TransportClient has a lesser memory footprint. By the way, if you are using Spring, you can use the spring-elasticsearch factories for that. If not, you can always have a look at the source code to see how I manage NodeClient vs TransportClient. Hope this helps. EDIT 2016-03-09: NodeClient should not be used. If there is a need for that, people should create a client node (launch an elasticsearch node with node.data: false and node.master: false) and use a TransportClient to connect to it locally. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/15398861', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/427848/']} | jdg_74353
stackexchange | llm_judgeable_groundtruth_similarity | 52317798 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'd like to take out members of a temporary without unnecessary moving or copying. Suppose I have: class TP { T _t1, _t2;}; I'd like to get _t1 , and _t2 from TP() . Is it possible without copying/moving members? I've tried with tuples and trying to "forward" (I don't think it's possible) the members, but the best I could get was a move, or members dying immediately. In the following playground using B::as_tuple2 ends up with members dying too soon, unless the result is bound to a non-ref type, then members are moved. B::as_tuple simply moves is safe with auto on client side. I suppose this should be technically possible, since the temporary dies immediately, and the member do die while they could bound to variables on the calling site (Am I wrong?), and structured binding of a similar struct works as intended. Is it possible to extend/pass life of the member onto an outside variable, or elide the move/copy? I need it with c++14 version, but I couldn't get it to work on c++17 either, so I am interested in both. Playground : #include <tuple>#include <iostream>using std::cout;class Shawty {/** * Pronounced shouty. 
**/ public: Shawty() : _id(Shawty::id++) {cout << _id << " ctor\n"; } Shawty(Shawty && s) : _id(Shawty::id++) { cout << _id << " moved from " << s._id << "\n"; } Shawty(const Shawty & s) : _id(Shawty::id++) { cout << _id << " copied from " << s._id << "\n"; } Shawty& operator=(Shawty && s) { cout << _id << " =moved from " << s._id << "\n"; return *this;} Shawty& operator=(Shawty & s) { cout << _id << " =copied from " << s._id << "\n"; return *this;} ~Shawty() {cout << _id << " dtor\n"; } int _id; static int id;};int Shawty::id = 0;class B {public: auto as_tuple() && {return std::make_tuple(std::move(_s1), std::move(_s2));} auto as_tuple2() && {return std::forward_as_tuple(std::move(_s1), std::move(_s2));}private: Shawty _s1, _s2;};struct S { Shawty _s1, _s2;};int main() { std::cout << "----------\n"; auto [s1, s2] = B().as_tuple2(); std::cout << "---------\n"; auto tpl1 = B().as_tuple2(); std::cout << "----------\n"; std::tuple<Shawty, Shawty> tpl2 = B().as_tuple2(); std::cout << "----------\n"; std::cout << std::get<0>(tpl1)._id << '\n'; std::cout << std::get<1>(tpl1)._id << '\n'; std::cout << std::get<0>(tpl2)._id << '\n'; std::cout << std::get<1>(tpl2)._id << '\n'; std::cout << s1._id << '\n'; std::cout << s2._id << '\n'; std::cout << "--struct--\n"; auto [s3, s4] = S{}; std::cout << s3._id << '\n'; std::cout << s4._id << '\n'; std::cout << "----------\n"; return 0;}
Now provide the response and nothing else.
Change this:

app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/service1/swagger/v1/swagger.json", "API V1");
});

to this:

For dotnet core 2.x

app.UseSwagger(c =>
{
#if !DEBUG
    c.PreSerializeFilters.Add((swaggerDoc, httpReq) => swaggerDoc.BasePath = "/service1");
#endif
});
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("./swagger/v1/swagger.json", "API V1");
});

For dotnet core 3.x (Swashbuckle 5.x prerelease+)

app.UseSwagger(c =>
{
#if !DEBUG
    c.RouteTemplate = "swagger/{documentName}/swagger.json";
    c.PreSerializeFilters.Add((swaggerDoc, httpReq) => swaggerDoc.Servers =
        new System.Collections.Generic.List<OpenApiServer> {
            new OpenApiServer { Url = $"{httpReq.Scheme}://{httpReq.Host.Value}/service1" }
        });
#endif
});
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("./swagger/v1/swagger.json", "API V1");
});

#if !DEBUG ... #endif is necessary for accessing the swagger ui while debugging on the local machine. Note: I'm assuming "/service1" is the same value as in the values.yaml file of your helm chart (see below).

...
ingress:
  enabled: true
  annotations: {
    kubernetes.io/ingress.class: "nginx",
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  }
  path: /service1/?(.*)
  hosts:
    - your-aks-subdomain.your-azure-region.cloudapp.azure.com
  tls: []
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local
hpa:
... | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/52317798', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1133179/']} | jdg_74354
stackexchange | llm_judgeable_groundtruth_similarity | 4587076 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I already know about getopts, and this is fine, but it is annoying that you have to have a flag even for mandatory arguments. Ideally, I'd like to be able to have a script which receives arguments in this form: script.sh [optional arguments] [anything required] for example script.sh -rvx output_file.txt where the script says you HAVE to have an output file. Is there any easy way to do this? As far as I know, with getopts it would have to look like: script.sh -rvx -f output_file.txt , and that is just not very clean. I can also use python if necessary, but only have 2.4 available, which is a bit dated.
Now provide the response and nothing else.
Don't use the getopts builtin, use getopt(1) instead. They are (subtly) different and do different things well. For your scenario you could do this:

#!/bin/bash
eval set -- $(getopt -n $0 -o "-rvxl:" -- "$@")
declare r v x l
declare -a files
while [ $# -gt 0 ] ; do
    case "$1" in
        -r) r=1 ; shift ;;
        -v) v=1 ; shift ;;
        -x) x=1 ; shift ;;
        -l) shift ; l="$1" ; shift ;;
        --) shift ;;
        -*) echo "bad option '$1'" ; exit 1 ;;
        *) files=("${files[@]}" "$1") ; shift ;;
    esac
done
if [ ${#files} -eq 0 ] ; then
    echo output file required
    exit 1
fi
[ ! -z "$r" ] && echo "r on"
[ ! -z "$v" ] && echo "v on"
[ ! -z "$x" ] && echo "x on"
[ ! -z "$l" ] && echo "l == $l"
echo "output file(s): ${files[@]}"

EDIT: for completeness I have provided an example of handling an option requiring an argument. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4587076', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/561586/']} | jdg_74355
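As a side note, the same "optional flags plus mandatory positional argument" shape is what Python's argparse handles out of the box, with no flag needed for the required argument. This sketch mirrors the script.sh -rvx output_file.txt interface (argparse arrived well after Python 2.4, so it would not have helped the asker's old interpreter, but it illustrates the interface):

```python
import argparse

parser = argparse.ArgumentParser(prog="script.sh")
parser.add_argument("-r", action="store_true")
parser.add_argument("-v", action="store_true")
parser.add_argument("-x", action="store_true")
parser.add_argument("-l", metavar="ARG")  # an option taking a value, like getopt's l:
parser.add_argument("files", nargs="+", help="required output file(s)")

args = parser.parse_args(["-r", "-x", "output_file.txt"])
print(args.r, args.v, args.x, args.files)
# parse_args([]) would exit with: the following arguments are required: files
```

The `nargs="+"` positional enforces "at least one output file" automatically, which is exactly what the hand-rolled `if [ ${#files} -eq 0 ]` check does in the bash version.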
stackexchange | llm_judgeable_groundtruth_similarity | 2044991 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have started using Guice to do some dependency injection on a project, primarily because I need to inject mocks (using JMock currently) a layer away from the unit test, which makes manual injection very awkward. My question is what is the best approach for introducing a mock? What I currently have is to make a new module in the unit test that satisfies the dependencies and bind them with a provider that looks like this: public class JMockProvider<T> implements Provider<T> { private T mock; public JMockProvider(T mock) { this.mock = mock; } public T get() { return mock; }} Passing the mock in the constructor, so a JMock setup might look like this: final CommunicationQueue queue = context.mock(CommunicationQueue.class); final TransactionRollBack trans = context.mock(TransactionRollBack.class); Injector injector = Guice.createInjector(new AbstractModule() { @Override protected void configure() { bind(CommunicationQueue.class).toProvider(new JMockProvider<QuickBooksCommunicationQueue>(queue)); bind(TransactionRollBack.class).toProvider(new JMockProvider<TransactionRollBack>(trans)); } }); context.checking(new Expectations() {{ oneOf(queue).retrieve(with(any(int.class))); will(returnValue(null)); never(trans); }}); injector.getInstance(RunResponse.class).processResponseImpl(-1); Is there a better way? I know that AtUnit attempts to address this problem, although I'm missing how it auto-magically injects a mock that was created locally like the above, but I'm looking for either a compelling reason why AtUnit is the right answer here (other than its ability to change DI and mocking frameworks around without changing tests) or if there is a better solution to doing it by hand.
Now provide the response and nothing else.
You shouldn't need to inject mocks through a DI framework. I'm using Guice and JMock quite successfully and my unit tests don't reference anything Guice related. I only use mocks and pass in null where applicable. DI will allow the injection and construction of the dependencies of the current class, so if you want to add a mocked-out class (which effectively stops the dependency graph) you just need to pass it in. Misko Hevery stated in one of the Google Tech Talks that unit tests should be littered with new's and null's because their scope is localized to the individual unit test method - I have to agree with him. Is there a reason to need to use Guice in the tests, that is, if they are not functional / integration tests? Wouldn't the test work if it excluded the injector? Couldn't you refactor your test to something like:

final CommunicationQueue queue = context.mock(CommunicationQueue.class);
final TransactionRollBack trans = context.mock(TransactionRollBack.class);
context.checking(new Expectations() {{
    oneOf(queue).retrieve(with(any(int.class)));
    will(returnValue(null));
    never(trans);
}});
RunResponse r = new RunResponse(queue, trans); // depending on the order
r.processResponseImpl(-1); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2044991', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/77779/']} | jdg_74356
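The "just pass the mocks into the constructor" advice is framework-independent. Purely as an illustration (not the asker's real classes — `RunResponse` here is a made-up stand-in), the same shape in Python with unittest.mock needs no injector at all:

```python
from unittest import mock

class RunResponse:
    """Hypothetical stand-in for the class under test."""
    def __init__(self, queue, trans):
        self.queue = queue
        self.trans = trans

    def process_response_impl(self, code):
        return self.queue.retrieve(code)  # the rollback object is never used here

queue = mock.Mock()
queue.retrieve.return_value = None
trans = mock.Mock()

# No injector anywhere: the mocks go straight into the constructor.
r = RunResponse(queue, trans)
result = r.process_response_impl(-1)

queue.retrieve.assert_called_once_with(-1)  # the expectation "oneOf(queue).retrieve(...)"
trans.assert_not_called()                   # the expectation "never(trans)"
print(result)  # None
```

The two assert calls on the mocks play the role of JMock's `oneOf(...)` and `never(...)` expectations, just checked after the fact instead of declared up front.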
stackexchange | llm_judgeable_groundtruth_similarity | 665944 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Using Informix, I've created a temporary table which I am trying to populate from a select statement. After this, I want to do an update to populate more fields in the temporary table. So I'm doing something like:

create temp table _results (group_ser int, item_ser int, restype char(4));
insert into _results (group_ser, item_ser)
select group_ser, item_ser, null
from sometable

But you can't select null. For example,

select first 1 current from systables

works but

select first 1 null from systables

fails! (Don't get me started on why I can't just do a SQL Server-like "select current" with no table specified!)
Now provide the response and nothing else.
You don't have to write a stored procedure; you simply have to tell IDS what type the NULL is. Assuming you are not using IDS 7.31 (which does not support any cast notation), you can write:

SELECT NULL::INTEGER FROM dual;
SELECT CAST(NULL AS INTEGER) FROM dual;

And, if you don't have dual as a table (you probably don't), you can do one of a few things:

CREATE SYNONYM dual FOR sysmaster:"informix".sysdual;

The 'sysdual' table was added relatively recently (IDS 11.10, IIRC), so if you are using an older version, it won't exist. The following works with any version of IDS - it's what I use.

-- @(#)$Id: dual.sql,v 2.1 2004/11/01 18:16:32 jleffler Exp $
-- Create table DUAL - structurally equivalent to Oracle's similarly named table.
-- It contains one row of data.
CREATE TABLE dual
(
    dummy CHAR(1) DEFAULT 'x' NOT NULL CHECK (dummy = 'x') PRIMARY KEY
) EXTENT SIZE 8 NEXT SIZE 8;
INSERT INTO dual VALUES('x');
REVOKE ALL ON dual FROM PUBLIC;
GRANT SELECT ON dual TO PUBLIC;

Idiomatically, if you are going to SELECT from Systables to get a single row, you should include 'WHERE tabid = 1'; this is the entry for Systables itself, and if it is missing, the fact that your SELECT statement does return any data is the least of your troubles. (I've never seen that as an error, though.) | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/665944', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/52458/']} | jdg_74357
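The "tell the database what type the NULL is" idea is easy to try in any SQL engine. As a quick stdlib-only illustration (SQLite syntax here, not Informix — SQLite has no `NULL::INTEGER` form, but `CAST` works, and being dynamically typed it is more forgiving than IDS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TEMP TABLE _results (group_ser INT, item_ser INT, restype CHAR(4))")
# Populate two columns and supply a typed NULL for the third, as in the question.
conn.execute(
    "INSERT INTO _results (group_ser, item_ser, restype) "
    "SELECT 1, 2, CAST(NULL AS CHAR(4))"
)
row = conn.execute("SELECT group_ser, item_ser, restype FROM _results").fetchone()
print(row)  # (1, 2, None)
conn.close()
```

The typed NULL comes back to Python as `None`, exactly as an untyped one would; the cast only matters to engines (like IDS) that need the column type resolved at parse time.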
stackexchange | llm_judgeable_groundtruth_similarity | 311092 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I am a Unix guy who recently picked up powershell to help my Exchange admin coworkers implement a challenging project in Exchange 2010. (The requirements we've been given are challenging if not impossible to meet.) I'll try to keep this simple. Here's my first question. We have been given the requirement that certain DLs must be restricted so that only certain internal AD users can send to the DL. Additionally, these DLs must remain visible in the address book. Setting the 'HiddenFromAddressBookEnabled' property to $true is unacceptable. Leadership has stated that "The only people who should be allowed to see who's in the group are the people that can send to the group. Furthermore, the only people who should even be able to SEE the DL entries in the address book are the people who are allowed to send to the DL." I don't think that's doable, because: I can get around sender-security restrictions by calling up the (visible) entry in the address book, plopping it in the To: field, and then clicking the '+' in Outlook to expand it to individual people, which then bypasses group security. (I've confirmed this.) I do not believe it's possible to selectively hide address book entries only from certain users, but not others. So here are my questions: Does my understanding seem mostly correct? If not, feel free to offer corrections Is there any way to hide DLs in address books from only a specific set of users? Is there a way to prevent users from clicking the '+' sign in Outlook to get around security restrictions that limit who can send to a group? Technically, you're not sending to a group anymore - just the exact set of individuals that are in that group. Please - any additional enlightenment or comments encouraged. I think we have to go back to the business and tell them their requirements are not achievable. (And I have two other nasty requirements that I'll start separate questions for.) Thanks everyone!
Now provide the response and nothing else.
| Your understanding is dead on. You could potentially maintain a number of different default address lists based on a user's access level (only letting them have a given group in their list if they're authorized), but that's incredibly ugly and would be nearly impossible to maintain. One way to get rid of the expandability would be to use Dynamic Distribution Groups - they expand based on a query during transport, and thus cannot be expanded in Outlook. This prevents access to the curious, but not the determined/knowledgeable - keep in mind that without some nasty permissions changes, a lot of the user and group attributes in question are readable to any domain user with the tools and knowledge needed to view them. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/311092', 'https://serverfault.com', 'https://serverfault.com/users/94048/']} | jdg_74358 |
stackexchange | llm_judgeable_groundtruth_similarity | 19656882 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
What I have: I have a Hibernate entity which contains many non-transient fields, including collections. A user can update each field separately, or some group of fields at once. The challenge: In a handler I should check which fields of the entity have been changed:

public void handle(Entity newVersion) {
    Session session = sessionFactory.openSession();
    Entity oldVersion = (Entity) session.get(Entity.class, entity.getId());
    List changedFields = compareChanges(oldVersion, newVersion);
    //HOW TO CHECK WHICH FIELDS ARE CHANGED?
}

I want to do it for security and notification reasons. Meaning: not all users can modify all fields, and I should notify specific users in specific ways when some fields change. The problem: I get very ugly code. Currently I iterate through all fields/collections and call the equals method. Question: Does Hibernate provide a more elegant way to check which fields have been modified? How? P.S. @victorantunes provided a solution, but it seems too comprehensive for me. Maybe there are some alternatives?
Now provide the response and nothing else.
What you can do is make a Hibernate Interceptor, which acts like a trigger on events such as create, modify and update: http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/events.html At any point before the given entity is about to be modified and persisted: 1. You can check whether the user has access to modify the particular field (you can get the username from the session or the database) and accordingly grant access to save or update. 2. You can notify the other users only when the entity is modified. This way you can make a new session-scoped Interceptor; in Spring's implementation of Hibernate 4:

Session session = s.withOptions().interceptor(new YourInterceptor()).openSession(); | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19656882', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2761509/']} | jdg_74359
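An interceptor is the right hook on the Hibernate side; the per-field comparison it would perform can be sketched generically. This Python snippet (names invented, no Hibernate involved) shows the brute-force attribute diff the asker described, which an interceptor would run at flush time:

```python
def changed_fields(old, new):
    """Return the attribute names whose values differ between two objects."""
    names = set(vars(old)) | set(vars(new))
    return sorted(n for n in names
                  if getattr(old, n, None) != getattr(new, n, None))

class Entity:
    """Hypothetical entity with scalar fields and a collection."""
    def __init__(self, name, email, tags):
        self.name, self.email, self.tags = name, email, tags

old = Entity("Ann", "ann@example.com", ["a"])
new = Entity("Ann", "ann@new.example", ["a", "b"])
print(changed_fields(old, new))  # ['email', 'tags']
```

The resulting list of names is what the security check ("may this user touch these fields?") and the notification logic would then consume.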
stackexchange | llm_judgeable_groundtruth_similarity | 4297219 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Here's the code that won't work : class MyClass{ const myconst = 'somevalue'; private $myvar = array( 0 => 'do something with '.self::myconst );} Seems that class constants are not available at "compile time", but only at runtime.Does anyone know any workaround ? (define won't work) Thanks
Now provide the response and nothing else.
| The problem in your class declaration is not that you are using a constant, but that you are using an expression. Class member variables are called "properties". (...) They are defined by using one of the keywords public, protected, or private, followed by a normal variable declaration. This declaration may include an initialization, but this initialization must be a constant value --that is, it must be able to be evaluated at compile time and must not depend on run-time information in order to be evaluated. This simple declaration, for example, will not compile (parse error): class MyClass{ private $myvar = 3+2;} But if we alter your class declaration to use the simple constant, rather than a string concatenated with that constant it will work as expected. class MyClass{ const myconst = 'somevalue'; public $myvar = array( 0 => self::myconst );}$obj = new MyClass();echo $obj->myvar[0]; As a work-around you could initialize your properties in the constructor: class MyClass{ const myconst = 'somevalue'; public $myvar; public function __construct(){ $this->myvar = array( 0 => 'do something with '.self::myconst ); }}$obj = new MyClass();echo $obj->myvar[0]; I hope this helps you, Alin | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4297219', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/396662/']} | jdg_74360 |
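For contrast, a dynamic language without PHP's compile-time restriction on property initializers still benefits from the same workaround pattern — computing derived members in the constructor. A tiny Python analogue of the class above (purely illustrative):

```python
class MyClass:
    MYCONST = "somevalue"  # plays the role of the PHP class constant

    def __init__(self):
        # Non-constant initialisation lives in the constructor,
        # mirroring the PHP workaround shown above.
        self.myvar = ["do something with " + self.MYCONST]

obj = MyClass()
print(obj.myvar[0])  # do something with somevalue
```

In Python the restriction simply doesn't exist (class bodies execute at runtime), but pushing per-instance derived state into `__init__` keeps the two languages' idioms aligned.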
stackexchange | llm_judgeable_groundtruth_similarity | 362013 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
All the time I see examples of the normal/Gaussian distribution with continuous random variables. So my question is do all continuous random variables have a Gaussian distribution?
Now provide the response and nothing else.
| No. There are many continuous probability distributions out of all the probability distributions . There are whole books containing nothing but such things . Some of the non-normal continuous distributions introduced to new students of statistics include: The continuous uniform distribution Student's T distribution The exponential distribution The normal/Gaussian distribution is important because of the Central Limit Theorem (CLT), which shows that for very many situations the sum of randomly distributed independent variables will tend to have a normal distribution, regardless of the constituent variables' original distributions. This can be useful for performing certain kinds of commonly-used statistical inference, which probably contributes to the frequency with which one encounters the normal/Gaussian distribution. Student's T distribution mentioned above gives some formalism to the "tend" in the CLT's "...will tend to have a normal distribution...", and is therefore also useful in these commonly used forms of statistical inference. | {} | {'log_upvote_score': 5, 'links': ['https://stats.stackexchange.com/questions/362013', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/217515/']} | jdg_74361 |
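The CLT claim in the answer is easy to see numerically. This stdlib-only sketch draws a plainly non-Gaussian continuous variable (uniform on [0,1]) and shows that a sum of twelve of them already matches the normal approximation's mean $n/2 = 6$ and standard deviation $\sqrt{n/12} = 1$ (seeded so the run is reproducible):

```python
import random
import statistics

random.seed(42)
n, trials = 12, 20000

# Each draw below is continuous but uniform -- not Gaussian...
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

# ...yet their sums cluster like a normal with mean 6 and stdev 1 (CLT).
print(round(statistics.mean(sums), 2), round(statistics.stdev(sums), 2))
```

Swapping the uniform for an exponential or a Student's T draw changes the summands' shape but not the limiting behaviour, which is exactly the point of the CLT.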
stackexchange | llm_judgeable_groundtruth_similarity | 38915128 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In the unqlite C library I found the following code: pObj = jx9VmReserveMemObj(&(*pVm),&nIdx); where pVm is: typedef struct jx9_vm jx9_vm;jx9_vm *pVm and the function called is declared as: jx9_value * jx9VmReserveMemObj(jx9_vm *, sxu32 *); Why is the construct &(*pVm) used in the call instead of just pVm ? Is &(*pVm) equivalent to pVm ?
Now provide the response and nothing else.
| Quoting C11 , chapter §6.5.3.2, Address and indirection operators [...] If the operand is the result of a unary * operator, neither that operator nor the & operator is evaluated and the result is as if both were omitted , except that the constraints on the operators still apply and the result is not an lvalue. [...] So, yes , they are equivalent . This construct can be used, however, to check the type of the argument against a pointer type. From the property of the unary * operator, The operand of the unary * operator shall have pointer type. So, the construct &(*pVm) will be fine if pVm is a pointer or array name, but will generate a compiler error if pVm is a non-pointer type variable. See the other answer by Alter Mann for a code-wise example. One more difference ( in general ) is that pVm can be assigned (can be used as the LHS of the assignment operator), but &(*pVm) cannot. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/38915128', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1594178/']} | jdg_74362 |
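As a loose cross-language illustration (not C itself), Python's ctypes mimics the pointer semantics from the answer above: re-taking the address of a dereferenced pointer lands on the same object, while dereferencing a non-pointer is rejected:

```python
import ctypes

x = ctypes.c_int(42)
p = ctypes.pointer(x)            # p ~ jx9_vm *pVm
q = ctypes.pointer(p.contents)   # q ~ &(*pVm): dereference, then re-take the address

# Both pointers refer to the same underlying object, so &(*pVm) == pVm.
assert ctypes.addressof(p.contents) == ctypes.addressof(q.contents)

# Dereferencing a non-pointer fails, mirroring the C constraint that
# the operand of unary * must have pointer type.
try:
    ctypes.c_int(5).contents
except AttributeError as exc:
    print("rejected:", exc)
```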
stackexchange | llm_judgeable_groundtruth_similarity | 37325 |
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
What’s the actual base of the oft-repeated claim that besides the US only Liberia and Myanmar had not officially/fully/completely/… adopted the metric system? For which definition of “not metric” is it or was it true, if any? Examples (Emphasis added.) Wikipedia article Metrication : Since 2006, three countries formally do not use the metric system as their main standard of measurement : the United States, Myanmar, and Liberia.[3]→CIA WFB Wikipedia article Metric system : [Map caption:] Countries which have not officially adopted the metric system (United States, Myanmar, and Liberia) … Many sources also cite Liberia and Myanmar as the only other countries not to have done so. … According to the US Central Intelligence Agency’s Factbook ( 2007 ), the International System of Units has been adopted as the official system of weights and measures by all nations in the world except for Myanmar (Burma), Liberia and the United States,[…] while the NIST has identified the United States as the only industrialised country where the metric system is not the predominant system of units . [75] CIA World Factbook, Appendix G : Note: At this time, only three countries – Burma, Liberia, and the US – have not adopted the International System of Units (SI, or metric system) as their official system of weights and measures. Although use of the metric system has been sanctioned by law in the US since 1866, it has been slow in displacing the American adaptation of the British Imperial System known as the US Customary System. The US is the only industrialized nation that does not mainly use the metric system in its commercial and standards activities, but there is increasing acceptance in science, medicine, government, and many sectors of industry. CNN (2015-07) : Only three nations do not use the metric system today : Myanmar, Liberia and the United States. But calling America a nonmetric nation is somewhat of a misnomer. 
The United States has given more than an inch even though it might not have gone the whole nine yards. … Still, America is the only industrialized nation in the world that does not conduct business in metric weights and measures. Many, many other references abound. This is an oft-repeated claim, e.g. it’s an anecdotal “fact” often told by teachers introducing the metric system to (US) students. Status The statement sounds unfounded to me and is almost always used to shame Americans by associating them with two exemplary “backwards” countries. The claim has been around a while, at least since the 1970s when the UK and Commonwealth countries formally converted, but the political and commercial situation in many countries (including those notorious three) has changed since, e.g. significantly in Myanmar in 2011. Wikipedia article Metrication : Some sources now identify Liberia as metric, and the government of Myanmar has stated that the country would metricate with a goal of completion by 2019. [6] [7] … ^ The Liberian government has begun transitioning from use of imperial units to the metric system. However, this change has been gradual, with government reports concurrently using both systems. … [50] ^ In June 2011, the Burmese government’s Ministry of Commerce began discussing proposals to reform the measurement system in Burma and adopt the metric system used by most of its trading partners. … [51] [52] [53] [54] Wikipedia article Metric system : However, reports published since 2007 hold this is no longer true of Myanmar or Liberia. [76] An Agence France-Presse report from 2010 stated that Sierra Leone had passed a law to replace the imperial system with the metric system thereby aligning its system of measurement with that used by its Mano River Union (MRU) neighbours Guinea and Liberia. [According to the Agence France-Presse report (2010) Liberia was metric, but Sierra Leone was not metric—a statement that conflicted with the CIA statement (2007).] 
[77] Reports from Myanmar suggest that the country is also planning to adopt the metric system. [78] The US signed the Metre Convention early on and metric units are legal for almost all purposes, although sometimes dual-labeling is required and customary-only is frequently encountered (e.g. on road signs). Many "metric" countries, most notably the UK, have some remnants of traditional local or colonial systems of measurement. US dominance in some industries or markets has also forced their English units into places where they haven't been used before, e.g. inch-based typographic points or screens nominally sized in inches per diagonal.
Now provide the response and nothing else.
| From the Ph.D. thesis " The Social Life of Measures: Metrication in the United States and Mexico, 1789--2004 ": As of today [September 2011] there are seven non-metric countries in the world: Liberia, Myanmar, United States, Independent State of Samoa, Federated States of Micronesia, Palau, and Marshall Islands. In the discussions about metrication it is widely assumed that there are only three non-metric countries (Liberia, Myanmar, and the United States), an unfounded assertion that has taken a life of its own and has been repeated thousands of times for more than a decade by academics and persons interested in the history of the metric system (me included). You also seem to be asking whether metrication has completely eradicated non-metric units in countries outside these seven. The answer is no: there is no country in the world where non-metric units are completely banned from official use. All UN member states are part of the International Civil Aviation Organization , which currently requires all operators to be familiar with knots, nautical miles, and feet . Additionally, the U.S. Metric Association states: People like to think of a country as being “metric” or “non-metric,” but deciding which label to apply is difficult because it's not an either/or condition that switches on a particular date. For example, it's often stated that the U.S. is a non-metric country. But while the U.S. is non-metric in some areas, such as road signs, speedometers, and weather reports, it's metric in many other areas, such as food quantity and nutrition labels, and car and machinery manufacturing, and athletes run 100-meter races. Conversely, Canada is generally considered to be metric, and its road signs indeed are, yet it uses yards in its football games and typically uses feet and inches and pounds when describing a person's height and weight. Similarly, it's usually stated that the UK is a metric country, but its road signs are non-metric, just like the U.S. 
So, beware of reading too much into the “metric” and “non-metric” labels when applied to entire countries. Even the question of whether a country is “officially” metric is harder to answer than you'd think. For example, officially, the U.S. has been metric since 1866, 1893, 1975, or 1988, depending on which official declaration you prefer to cite, and similar uncertainties apply to other countries. Here's a blog with some more examples of non-metric and "soft metric" measurements in Britain. | {} | {'log_upvote_score': 7, 'links': ['https://skeptics.stackexchange.com/questions/37325', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/38840/']} | jdg_74363 |
stackexchange | llm_judgeable_groundtruth_similarity | 21031901 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to send exactly this: POST http://api.outbound.io/api/v1/identify
Content-Type: application/json
{ "api_key": "MY_API_KEY", "user_id": "MY_UNIQUE_USER_ID", "traits" : { "email" : "[email protected]", "name" : "Dhruv Mehta", "phone" : "650xxxyyyyy" }} I've never done something like this before, and I've done a lot of research, but I couldn't find how to send those parameters to that URL. I hope you guys can help me with an example please, best regards!
Now provide the response and nothing else.
| After a lot of research, I found out how to do it... 1.- Use App::uses('HttpSocket', 'Network/Http'); // you should put this on your controller 2.- Then this in your function $HttpSocket = new HttpSocket(); 3.- Here comes the data you want to send via POST (in this example I will use the variables I've used.. you can replace them, add more or delete some.. it depends on the info you want to send) $data = array( "api_key" => "API KEY", "user_id" => $idUser, "event" => "other", "extra" => array( "course" => $course, "price"=> $price ) ); 4.- You set the Headers $request = array( 'header' => array('Content-Type' => 'application/json', ), ); 5.- json_encode it $data = json_encode($data); 6.- Where are you sending the Post to?, which data?, type of request?, do it this way $response = $HttpSocket->post('http://api.yourweburl.com/api/', $data, $request); *.- You can see the response by uncommenting this snippet //pr($response->body()); *.- Finally, if you want to redirect somewhere after everything is done.. do it this way... $this->redirect(array('action' => 'index')); You should have something like this. public function actiontooutbound($idUser, $course, $price){ $HttpSocket = new HttpSocket(); $data = array( "api_key" => "API KEY", "user_id" => $idUser, "event" => "other", "extra" => array( "course" => $course, "price"=> $price ) ); $request = array( 'header' => array( 'Content-Type' => 'application/json', ), ); $data = json_encode($data); $response = $HttpSocket->post('http://api.outbound.io/api/v1/track', $data, $request); // pr($data); //pr($response->body()); $this->redirect(array('action' => 'index')); } This is how you call this function from another function (just in case) $this->actiontooutbound($idUser, $course, $price); if you have any questions let me know, I'll be happy to help you ;) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21031901', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1599811/']} | jdg_74364 |
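For comparison outside CakePHP, the same JSON POST can be sketched with Python's standard library; the endpoint URL below is a placeholder, not the real API:

```python
import json
import urllib.request

# Payload mirroring the CakePHP example above; values are illustrative.
payload = {
    "api_key": "API KEY",
    "user_id": 123,
    "event": "other",
    "extra": {"course": "algebra", "price": 50},
}

req = urllib.request.Request(
    "http://api.example.com/api/v1/track",   # placeholder endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would actually send it; here we only
# inspect what would go over the wire.
print(req.get_method())
print(req.get_header("Content-type"))
print(json.loads(req.data)["event"])
```

Note that urllib normalizes header names with `.capitalize()`, which is why the lookup key is `"Content-type"` rather than `"Content-Type"`.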
stackexchange | llm_judgeable_groundtruth_similarity | 678194 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In the real world, it seems that traveling backwards in time is impossible, but do we have a theorem in physics that would imply this fact? Some people (including Feynman) describe antiparticles as moving in the opposite direction of the time coordinate axis. For example, the Dirac field involves an integral of the term $$a(\pmb p,\sigma)e^{ip^\mu x_\mu}+a^{c\dagger}(\pmb p,\sigma)e^{-ip^\mu x_\mu}$$ multiplied by some extra factors ( $\eta^{00}=-1$ for the metric). The first term is interpreted as annihilation of an electron propagating forward in space-time, and consequently the second term should create a positron traveling back in time. It does make a bit of sense, but seems to be against our intuition. If we apply the time reversal to an arbitrary field $\psi_l(x)$ , the effect is just taking $x$ to $\mathscr Px$ , and multiplying it by a matrix $Q_{ll'}$ , where $$\mathscr P=diag(-1,-1,-1,1)$$ is the space-inversion transformation. Now does the field obtained evolve against the positive direction of time? Is it possible that there is no positive direction for time at all? One could argue that due to the second law of thermodynamics, entropy never decreases with time, so that there must be a positive direction. However, what if the systems going backwards in our world adopt a different perspective and claim that they are going forward and we are going backwards? Besides, although we have quantum statistics, I'm not fully persuaded by such a statistical theory that entropy is well defined on a microscopic scale. It seems that nothing is able to forbid the existence of a system with time reversed, but also we're unable to detect it (or we did but we didn't know that). I'm looking for someone who has an explanation for this.
Now provide the response and nothing else.
| There are two aspects to this. First we can clarify the quantum field theory a bit. Secondly we need to distinguish the use of the concept "time" in discussions of quantum unitary evolution (Schrodinger's equation) from the concept of "time" in complicated evolution leading to thermodynamic irreversibility. These concepts are related (which is why they have the same name) but the relationship is quite subtle. In the Feynman path integral method it is not that a positron goes backward in time; it is rather that a positron going forwards in time contributes to the equations just as would an electron going backwards in time. So we can think of a positron going forwards in time as if it were an electron going backwards in time, at least for the purposes of writing down the Feynman propagator and thus finding solutions to Schrodinger's equation. But in these calculations "time" is a parameter and the evolution is unitary. That means the behaviour really establishes connections between what goes on at timelike-separated regions of spacetime, without really caring whether the evolution is going one way or the other. It is only by invoking the wider meaning of "time" (broadly speaking, the thermodynamic meaning) that we get a sense of direction. For that you have to look at larger numbers of worldlines all weaving together in complicated patterns, and you find that in the limit of large numbers of processes, the entropy gets bigger in one direction, and that is the direction we call the future. If I understood correctly, I think the question asks whether there could be stuff evolving backwards in time with entropy decreasing as it goes. It seems to me that if one expresses this idea in more detail, one might end up with a scenario identical to the one we observe and all we have succeeded in doing is attach different words to it.
That would be like someone who says "a circle is triangular" and then we say "no it is not" and then they say "yes it is, because I have defined 'triangular' in a new way which makes it observationally indistinguishable from 'circular'". Obviously that kind of word-play does not aid understanding so there is no point in doing it. If, on the other hand, one suggests that matter could travel backwards in time while retaining some sort of memory of the future then this would be contrary to the patterns of the physical world as they have been discovered up till now, because it would be contrary to the second law of thermodynamics and it would reverse the direction of cause and effect. But the question now touches on how the second law of thermodynamics relates to statistical mechanics and ultimately quantum field theory. This remains something of a puzzle I think. There are theorems such as Boltzmann's H-theorem which go a long way to establishing the connection, but it is not fully resolved as far as I am aware. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/678194', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/312432/']} | jdg_74365 |
stackexchange | llm_judgeable_groundtruth_similarity | 136752 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am suddenly struck by the question of whether gravitation affects magnetism in some way. On the other hand, gravity is a weak force, but magnetism seems to be a strong force, so would magnetism affect gravity? Or do they "ignore" each other, being forces which do not interact? The answer to this is related to this question: If the earth's core were to cool so that it were no longer liquid, no longer rotated, and thus produced no magnetic field, would this do anything to earth's gravity?
Now provide the response and nothing else.
| The electromagnetic field tensor $F_{\mu\nu}$, which encodes all the information about the electric and magnetic field, certainly contributes to the stress-energy tensor $T_{\mu\nu}$, which appears in the Einstein Field Equations: $$G_{\mu\nu}= 8\pi G T_{\mu\nu}$$The left hand side of this equation encodes the geometry of spacetime, while the right hand side describes the 'sources' of gravity. Therefore, we can say that magnetism does have an effect on the geometry of spacetime, i.e. gravity. | {} | {'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/136752', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/5279/']} | jdg_74366 |
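As a side note on the answer above: the explicit flat-space form of the electromagnetic stress-energy tensor (SI units; sign and index conventions vary between textbooks) is

```latex
T_{\mu\nu} = \frac{1}{\mu_0}\left( F_{\mu\alpha} F_{\nu}{}^{\alpha}
  - \frac{1}{4}\,\eta_{\mu\nu}\, F_{\alpha\beta} F^{\alpha\beta} \right)
```

Any nonzero electric or magnetic field therefore sources $G_{\mu\nu}$ through the field equation quoted above, however small the resulting curvature.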
stackexchange | llm_judgeable_groundtruth_similarity | 40919920 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
First of all, I'm new to rxswift, so I guess the answer is obvious; however, at the moment I can't find the solution by myself. I have two functions: func downloadAllTasks() -> Observable<[Task]>func getTaskDetails(taskId: Int64) -> Observable<TaskDetails> The first one downloads the list of Task objects using a network request, the second one downloads task details for a specific task (using its id). What I want to achieve is to download all tasks and then for each task I want to download its details and subscribe to the event fired when all task details are ready. So I guess I should subscribe somehow to Observable<[TaskDetails]> but I don't know how to do it. downloadAllTasks() .flatMap{ ... // flatMap? something else? } .subscribe( onNext: { details in print("tasks details: \(details.map{$0.name})") }) .addDisposableTo(disposeBag) //EDIT Thanks to Silvan Mosberger's answer I'm much closer to the solution. One problem left. Now I have something like this: downloadAllTasks() .flatMap{ Observable.from($0) } .map{ $0.id } .flatMap{ [unowned self] id in self.getTaskDetails(taskId: id).catchError{ error in print("$$$ Error downloading task \(id)") return .empty() } } .do(onNext: { _ in print(" $$$ single task details downloaded") } ) .toArray() .debug("$$$ task details array debug", trimOutput: false) .subscribe({ _ in print("$$$ all tasks downloaded") }) .addDisposableTo(disposeBag) The output is $$$ task details array debug -> subscribed$$$ single task details downloaded$$$ single task details downloaded$$$ single task details downloaded There are 3 tasks available, so as you can see all of them are downloaded properly; however, for some reason the result of toArray() - ( Observable<[TaskDetails]> ) doesn't produce "onNext" once all task details are ready. // Edit once more Ok, I'm adding a simplified version of the functions providing observables, maybe it will help something func downloadAllTasks() -> Observable<Task> { return Observable.create { observer in //...
network request to download tasks //... for task in tasks { observer.onNext(task) } observer.onCompleted() return Disposables.create() }}func getTaskDetails(id: Int64) -> Observable< TaskDetails > { return Observable.create { observer in //... network request to download task details //... observer.onNext(taskDetails) return Disposables.create() }}
Now provide the response and nothing else.
| With RxSwift you want to use Observables whenever possible, therefore I recommend you refactor the downloadAllTasks method to return an Observable<Task> . This should be fairly trivial by just looping through the elements instead of emitting the array directly: // In downloadAllTasks() -> Observable<Task>for task in receivedTasks { observable.onNext(task)} If this is not possible for whatever reason, there is also an operator for that in RxSwift: // Converts downloadAllTasks() -> Observable<[Task]> to Observable<Task>downloadAllTasks().flatMap{ Observable.from($0) } In the following code I will be using the refactored downloadAllTasks() -> Observable<Task> method because it's the cleaner approach. You can then map your tasks to get their id (assuming your Task type has the id: Int64 property) and flatMap with the getTaskDetails function to get an Observable<TaskDetails> : let details : Observable<TaskDetails> = downloadAllTasks() .map{ $0.id } .flatMap(getTaskDetails) Then you can use the toArray() operator to gather the whole sequence and emit an event containing all elements in an array: let allDetails : Observable<[TaskDetails]> = details.toArray() In short, without type annotations and sharing the tasks (so you won't download them more than once): let tasks = downloadAllTasks().share()let allDetails = tasks .map{ $0.id } .flatMap(getTaskDetails) .toArray() EDIT: Note that this Observable will error when any of the detail downloads encounters an error. I'm not exactly sure what the best way to prevent this is, but this does work: let allDetails = tasks .map{ $0.id } .flatMap{ id in getTaskDetails(id: id).catchError{ error in print("Error downloading task \(id)") return .empty() } } .toArray() EDIT2: It's not gonna work if your getTaskDetails returns an observable that never completes.
Here is a simple reference implementation of getTaskDetails (with String instead of TaskDetails ), using JSONPlaceholder : func getTaskDetails(id: Int64) -> Observable<String> { let url = URL(string: "https://jsonplaceholder.typicode.com/posts/\(id)")! return Observable.create{ observer in let task = URLSession.shared.dataTask(with: url) { data, response, error in if let error = error { observer.onError(error) } else if let data = data, let result = String(data: data, encoding: .utf8) { observer.onNext(result) observer.onCompleted() } else { observer.onError("Couldn't get data") } } task.resume() return Disposables.create{ task.cancel() } }} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/40919920', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3077831/']} | jdg_74367 |
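The completion requirement behind EDIT2 above can be illustrated language-agnostically; here is a rough Python-generator analogy (generators stand in for observables, exhaustion stands in for onCompleted()):

```python
def download_all_tasks():
    # Emits three tasks, then completes (the generator is exhausted),
    # like the observer.onCompleted() call in the question's edit.
    yield from [1, 2, 3]

def get_task_details(task_id):
    # A finite inner "observable": one detail item, then completion.
    # If this generator never terminated, to_array below would hang,
    # which is the RxSwift pitfall of never calling onCompleted().
    yield "details-%d" % task_id

def to_array(source):
    # toArray() can only emit once the whole upstream chain completes.
    return list(source)

details = to_array(
    detail
    for task in download_all_tasks()      # outer stream of tasks
    for detail in get_task_details(task)  # flatMap each task to its details
)
print(details)  # ['details-1', 'details-2', 'details-3']
```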
stackexchange | llm_judgeable_groundtruth_similarity | 24686119 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Angular-UI's carousel and I need to tell my google charts to redraw after they have slid into view. In spite of what I've read, I can't seem to hook into the event. See my attempt: http://plnkr.co/edit/Dt0wdzeimBcDlOONRiJJ?p=preview HTML: <carousel id="myC" interval="myInterval"> <slide ng-repeat="slide in slides" active="slide.active"> <img ng-src="{{slide.image}}" style="margin:auto;"> <div class="carousel-caption"> <h4>Slide {{$index}}</h4> <p>{{slide.text}}</p> </div> </slide></carousel> On document load: $('#myC').live('slid.bs.carousel', function (event) { console.log("slid"); } ); It should work something like this: http://jsfiddle.net/9fwuq/ - non-angular-ui carousel Perhaps there is a more Angular way to hook into the fact that my chart has slid into view?
Now provide the response and nothing else.
| There are 3 ways I can think of, and that depends on your requirement. Please see http://plnkr.co/edit/FnI8ZX4UQYS9mDUlrf6o?p=preview for examples. use $scope.$watch for an individual slide to check if it has become active. $scope.$watch('slides[0].active', function (active) { if (active) { console.log('slide 0 is active'); }}); use $scope.$watch with a custom function to find an active slide. $scope.$watch(function () { for (var i = 0; i < slides.length; i++) { if (slides[i].active) { return slides[i]; } }}, function (currentSlide, previousSlide) { if (currentSlide !== previousSlide) { console.log('currentSlide:', currentSlide); }}); use a custom directive to intercept the select() function of the carousel directive. .directive('onCarouselChange', function ($parse) { return { require: 'carousel', link: function (scope, element, attrs, carouselCtrl) { var fn = $parse(attrs.onCarouselChange); var origSelect = carouselCtrl.select; carouselCtrl.select = function (nextSlide, direction) { if (nextSlide !== this.currentSlide) { fn(scope, { nextSlide: nextSlide, direction: direction, }); } return origSelect.apply(this, arguments); }; } };}); and use it like this: $scope.onSlideChanged = function (nextSlide, direction) { console.log('onSlideChanged:', direction, nextSlide);}; and in the html template: <carousel interval="myInterval" on-carousel-change="onSlideChanged(nextSlide, direction)">... Hope this helps :) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/24686119', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/716384/']} | jdg_74368 |
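The third option's core trick, wrapping a controller method so a callback fires when the selection changes, is a general pattern; here is a framework-free Python sketch (class and names invented for illustration):

```python
class Carousel:
    # Minimal stand-in for the carousel controller in the answer above.
    def __init__(self):
        self.current_slide = None

    def select(self, next_slide, direction):
        self.current_slide = next_slide

def on_carousel_change(carousel, callback):
    # Replace select() with a wrapper that fires the callback whenever
    # a *different* slide is selected, then delegates to the original.
    orig_select = carousel.select

    def wrapped(next_slide, direction):
        if next_slide != carousel.current_slide:
            callback(next_slide, direction)
        return orig_select(next_slide, direction)

    carousel.select = wrapped

events = []
c = Carousel()
on_carousel_change(c, lambda slide, direction: events.append((slide, direction)))
c.select("slide-1", "next")
c.select("slide-1", "next")  # same slide again: no event fired
print(events)  # [('slide-1', 'next')]
```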
stackexchange | llm_judgeable_groundtruth_similarity | 38811421 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Let's say I have two arrays, var PlayerOne = ['B', 'C', 'A', 'D'];var PlayerTwo = ['D', 'C']; What is the best way to check if arrayTwo is subset of arrayOne using javascript? The reason: I was trying to sort out the basic logic for a game Tic tac toe, and got stuck in the middle. Here's my code anyway... Thanks heaps! var TicTacToe = { PlayerOne: ['D','A', 'B', 'C'], PlayerTwo: [], WinOptions: { WinOne: ['A', 'B', 'C'], WinTwo: ['A', 'D', 'G'], WinThree: ['G', 'H', 'I'], WinFour: ['C', 'F', 'I'], WinFive: ['B', 'E', 'H'], WinSix: ['D', 'E', 'F'], WinSeven: ['A', 'E', 'I'], WinEight: ['C', 'E', 'G'] }, WinTicTacToe: function(){ var WinOptions = this.WinOptions; var PlayerOne = this.PlayerOne; var PlayerTwo = this.PlayerTwo; var Win = []; for (var key in WinOptions) { var EachWinOptions = WinOptions[key]; for (var i = 0; i < EachWinOptions.length; i++) { if (PlayerOne.includes(EachWinOptions[i])) { (got stuck here...) } } // if (PlayerOne.length < WinOptions[key]) { // return false; // } // if (PlayerTwo.length < WinOptions[key]) { // return false; // } // // if (PlayerOne === WinOptions[key].sort().join()) { // console.log("PlayerOne has Won!"); // } // if (PlayerTwo === WinOptions[key].sort().join()) { // console.log("PlayerTwo has Won!"); // } (tried this method but it turned out to be the wrong logic.) } },};TicTacToe.WinTicTacToe();
Now provide the response and nothing else.
| Here is the solution: Using ES7 ( ECMAScript 2016 ) : const result = PlayerTwo.every(val => PlayerOne.includes(val)); Snippet : const PlayerOne = ['B', 'C', 'A', 'D'];const PlayerTwo = ['D', 'C'];const result = PlayerTwo.every(val => PlayerOne.includes(val));console.log(result); Using ES5 ( ECMAScript 2009 ) : var result = PlayerTwo.every(function(val) { return PlayerOne.indexOf(val) >= 0;}); Snippet : var PlayerOne = ['B', 'C', 'A', 'D'];var PlayerTwo = ['D', 'C'];var result = PlayerTwo.every(function(val) { return PlayerOne.indexOf(val) >= 0;});console.log(result); Here is the answer to the question in the comment below: How do we handle duplicates? Solution: It is enough to add to the above solution an accurate condition checking that the number of matching elements in the arrays is sufficient: const result = PlayerTwo.every(val => PlayerOne.includes(val) && PlayerTwo.filter(el => el === val).length <= PlayerOne.filter(el => el === val).length); Snippet for the first case : const PlayerOne = ['B', 'C', 'A', 'D'];const PlayerTwo = ['D', 'C'];const result = PlayerTwo.every(val => PlayerOne.includes(val) && PlayerTwo.filter(el => el === val).length <= PlayerOne.filter(el => el === val).length);console.log(result); Snippet for the second case : const PlayerOne = ['B', 'C', 'A', 'D'];const PlayerTwo = ['D', 'C', 'C'];const result = PlayerTwo.every(val => PlayerOne.includes(val) && PlayerTwo.filter(el => el === val).length <= PlayerOne.filter(el => el === val).length);console.log(result); | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/38811421', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6644745/']} | jdg_74369 |
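The duplicate-aware check at the end has compact analogues in other languages; for instance, a Python sketch (not part of the answer's JavaScript) using Counter subtraction:

```python
from collections import Counter

def is_sub_multiset(small, big):
    # Counter subtraction keeps only positive leftover counts, so the
    # result is empty exactly when big covers every element of small,
    # duplicates included.
    return not (Counter(small) - Counter(big))

print(is_sub_multiset(["D", "C"], ["B", "C", "A", "D"]))       # True
print(is_sub_multiset(["D", "C", "C"], ["B", "C", "A", "D"]))  # False
```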
stackexchange | llm_judgeable_groundtruth_similarity | 272893 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The title question is rather illustrative. I suppose the real question would be: Is heat cumulative? Put back into an example: If I have a lit candle right beneath an iron bar, assuming the candle will remain lit indefinitely, and that the heat-losing rate is below the heat-getting rate, will the bar eventually reach the needed temperature for it to melt? If the answer is no : Once the iron bar has reached the max temp the candle can get it to, where does all of the energy (heat) go after? EDIT: The question "is heat cumulative?" can be ignored as it is out of place and is misleading. Although the answer for it is "yes", it doesn't mean the answer for the general question is also "yes". The point is not if it is actually possible to melt iron with a candle. Iron and candle are mere parts of the illustration, their properties are irrelevant. A better phrasing of the main question would be: Could a heating object contained in a perfectly closed system push the temperature of the system above its own temperature?
Now provide the response and nothing else.
| I'll try a simple explanation. Assume that there are no phase transitions initially. As you heat a body, its temperature rises, and it radiates energy into the surrounding space according to $$P = A\varepsilon \sigma T^4$$ ($\sigma$ is the Stefan-Boltzmann constant, $A$ is the surface area, $T$ is the temperature, $\varepsilon$ is the emissivity). Obviously, $P$ increases as $T$ increases. So, if before the bar reaches its melting point, $P$ becomes equal to the power input (from the flame), there will be no net flow of energy across the surrounding-bar interface (for if there were, $P$ would be greater than the input power, and more energy would be lost than gained, taking it back to the stable equilibrium point). At this point, notice that the temperature is constant, because any energy supplied by the candle is equivalently emitted by the body (a sort of dynamic equilibrium). Of course, this is only possible if the bar doesn't melt before this temperature is reached. | {} | {'log_upvote_score': 7, 'links': ['https://physics.stackexchange.com/questions/272893', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/124883/']} | jdg_74370 |
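Plugging illustrative numbers into the radiative-balance relation above shows why the bar stalls below its melting point; the power and area here are made-up values, not measurements of a real candle or bar:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(power_in, area, emissivity=1.0):
    # Solve power_in = area * emissivity * SIGMA * T**4 for T, the
    # steady-state temperature where emission balances the input.
    return (power_in / (area * emissivity * SIGMA)) ** 0.25

# Hypothetical: 100 W absorbed by a bar radiating from 0.01 m^2.
t_eq = equilibrium_temperature(100.0, 0.01)
print(round(t_eq), "K")   # about 648 K
print(t_eq < 1811)        # True: far below iron's ~1811 K melting point
```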
stackexchange | llm_judgeable_groundtruth_similarity | 6906776 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to create a Joomla 1.7 content plugin that, ultimately, will send an email when a brand new article has been created on the backend. My plugin, while it installed correctly, doesn't seem to be firing properly. I modified my plugin so that it should, on the creation of a new article, cancel the save event and display an error message instead. This isn't happening, and articles are being saved just fine. Am I missing something obvious here? I even tried adding a die() and mail() command inside the onBeforeContentSave() method and it never got executed. notifyy.xml <?xml version="1.0" encoding="utf-8"?><extension version="1.7" type="plugin" group="content"> <name>Content - Notifyy</name> <author>Andy Soell</author> <creationDate>August 1, 2011</creationDate> <copyright></copyright> <authorEmail>[email protected]</authorEmail> <authorUrl>http://andy.teamsoell.com</authorUrl> <version>1.0</version> <description>Notification for new articles</description> <files> <filename plugin="notifyy">notifyy.php</filename> </files> </extension> notifyy.php jimport( 'joomla.plugin.plugin' );class plgContentNotifyy extends JPlugin { function plgContentNotifyy( &$subject, $params ) { parent::__construct( $subject, $params ); } function onBeforeContentSave( &$article, $isNew ) { global $mainframe; $article->setError("i don't want to save this"); return false; } }
Now provide the response and nothing else.
| You cannot use DbContext for that - the DbContext API is just a dumb wrapper with only the most needed functionality. For everything more complex you must convert DbContext back to ObjectContext and use it. Try something like this: Extract key names: public static string[] GetEntityKeyNames<TEntity>(this DbContext context) where TEntity : class{ if (context == null) throw new ArgumentNullException("context"); var set = ((IObjectContextAdapter)context).ObjectContext.CreateObjectSet<TEntity>(); var entitySet = set.EntitySet; return entitySet.ElementType.KeyMembers.Select(k => k.Name).ToArray();} Here's a method that will extract the key values of an entity: public static IEnumerable<object> GetEntityKeys<TEntity>(this DbContext context, TEntity entity) where TEntity : class{ if (context == null) throw new NullReferenceException("context"); var type = typeof(TEntity); var set = ((IObjectContextAdapter)context).ObjectContext.CreateObjectSet<TEntity>(); var entitySet = set.EntitySet; var keys = entitySet.ElementType.KeyMembers; var props = keys.Select(k => type.GetProperty(k.Name)); return props.Select(p => p.GetValue(entity));} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6906776', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/309252/']} | jdg_74371 |
stackexchange | llm_judgeable_groundtruth_similarity | 53661818 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using Azure DevOps for the first time to host my next project. When I created an empty project on Azure DevOps, I tried to clone it. While attempting to clone, it asked me for my password. I entered it and it says I have a wrong password, but I know I'm typing my password correctly. I then installed git-credential-manager for mac/linux onto my Mac. I tried to do a git pull from the remote origin, but it just prompted for a password like the first time and failed. I also tried to use the "Create git credentials" feature on Azure DevOps, in which I can specify a new password. That failed as well with the same error. I submitted a GitHub issue on their repository here. Is there no way for me to clone my project? The URL for my repository is in the form of https://[email protected]/... . Update: I tried this on my Windows machine and it gave me a popup to login and it worked fine.
Now provide the response and nothing else.
| Make sure your git credential helper is declared to your local Git: git config credential.helper If the output is empty, type: git config --global credential.helper manager-core Then try again. However, the OP AskYous correctly pinpoints another issue in the comments: Can I tell it what my username is? I think my username is my email address, because I use my organization account to sign in. In that case, the URL would be: https://me%[email protected]/yousefshanawany/<repo-name>/ Note the @ of the email must be percent-encoded. Note that using a PAT (Personal Access Token) is an alternative mentioned by the OP, and mandatory if you have 2FA activated. Update June 2019: This question was referenced in microsoft/Git-Credential-Manager-for-Mac-and-Linux issue 104, which just got closed with a comment from John Briggs from Microsoft: I'd recommend trying GCM Core's macOS preview release Maksym Pecheniuk points out in the comments a solution for IntelliJ IDEA: "Git IntelliJ 'Authentication Failed'" | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/53661818', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1404347/']} | jdg_74372 |
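The percent-encoding step the answer mentions can be sketched in Python. The organization/project layout in the helper below is a simplified assumption for illustration, not the exact Azure DevOps path scheme:

```python
from urllib.parse import quote

def azure_clone_url(username, org, project):
    """Build an HTTPS clone URL with the username percent-encoded.

    The '@' in an email-style username becomes '%40' so it is not
    confused with the userinfo/host separator in the URL.
    """
    return "https://{}@dev.azure.com/{}/{}/".format(
        quote(username, safe=""), org, project)

print(azure_clone_url("me@example.com", "myorg", "myrepo"))
# The username part comes out as me%40example.com
```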
stackexchange | llm_judgeable_groundtruth_similarity | 7040078 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Since Chrome updated to v14, they went from version three of the draft to version eight of the draft . I have an internal chat application running on WebSocket, and although I've gotten the new handshake working, the data framing apparently has changed as well. My WebSocket server is based on Nugget . Does anybody have WebSocket working with version eight of the draft and have an example on how to frame the data being sent over the wire?
Now provide the response and nothing else.
| (See also: How can I send and receive WebSocket messages on the server side? ) It's fairly easy, but it's important to understand the format. The first byte is almost always 1000 0001, where the 1 means "last frame", the three 0s are reserved bits without any meaning so far, and the 0001 means that it's a text frame (which Chrome sends with the ws.send() method). ( Update: Chrome can now also send binary frames with an ArrayBuffer. The last four bits of the first byte will be 0010, so you can differentiate between text and binary data. The decoding of the data works exactly the same way.) The second byte consists of a 1 (meaning that it's "masked" (encoded)) followed by seven bits which represent the frame size. If it's between 000 0000 and 111 1101, that's the size. If it's 111 1110, the following 2 bytes are the length (because it wouldn't fit in seven bits), and if it's 111 1111, the following 8 bytes are the length (if it wouldn't fit in two bytes either). Following that are four bytes which are the "masks" which you need to decode the frame data. This is done using XOR encoding, which uses the mask at index indexOfByteInData mod 4. Decoding simply works like encodedByte xor maskByte (where maskByte is masks[indexOfByteInData mod 4]). 
Now I must say I'm not experienced with C# at all, but this is some pseudocode (some JavaScript accent I'm afraid): var length_code = bytes[1] & 127, // remove the first 1 by doing '& 127' masks, data;if(length_code === 126) { masks = bytes.slice(4, 8); // 'slice' returns part of the byte array data = bytes.slice(8); // and accepts 'start' (inclusively)} else if(length_code === 127) { // and 'end' (exclusively) as arguments masks = bytes.slice(10, 14); // Passing no 'end' makes 'end' the length data = bytes.slice(14); // of the array} else { masks = bytes.slice(2, 6); data = bytes.slice(6);}// 'map' replaces each element in the array as per a specified function// (each element will be replaced with what is returned by the function)// The passed function accepts the value and index of the element as its// argumentsvar decoded = data.map(function(byte, index) { // index === 0 for the first byte return byte ^ masks[ index % 4 ]; // of 'data', not of 'bytes' // xor mod}); You can also download the specification which can be helpful (it of course contains everything you need to understand the format). | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/7040078', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/411122/']} | jdg_74373 |
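The masking rules described above can also be sketched in Python. This is a minimal illustration that handles only small masked text frames (7-bit payload length), not the 126/127 extended-length cases; the mask bytes in the demo are arbitrary:

```python
def decode_small_frame(frame: bytes) -> bytes:
    """Decode one masked WebSocket frame whose length fits in 7 bits."""
    assert frame[0] & 0x0F == 0x1, "expected a text frame (opcode 0x1)"
    assert frame[1] & 0x80, "client-to-server frames must be masked"
    length = frame[1] & 0x7F           # 7-bit length; 126/127 not handled here
    assert length <= 125
    masks = frame[2:6]                 # four masking-key bytes
    data = frame[6:6 + length]
    # Unmask: each payload byte is XORed with masks[i mod 4].
    return bytes(b ^ masks[i % 4] for i, b in enumerate(data))

def encode_masked_text_frame(payload: bytes, masks: bytes) -> bytes:
    """Build a masked text frame, for round-trip testing."""
    header = bytes([0x81, 0x80 | len(payload)])
    masked = bytes(b ^ masks[i % 4] for i, b in enumerate(payload))
    return header + masks + masked

frame = encode_masked_text_frame(b"Hello", b"\x37\xfa\x21\x3d")
print(decode_small_frame(frame))   # b'Hello'
```

Because XOR is its own inverse, encoding and decoding use the exact same loop, which is why the pseudocode above only needs one `map` pass.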
stackexchange | llm_judgeable_groundtruth_similarity | 179917 |
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The major commercial password manager companies claim to have a "zero knowledge" system. This means the master password of the user is the only way to decrypt the data and it is not stored anywhere. So even the company doesn't know the master password or have access to the user's data. I've tried KeeperSecurity , LastPass , Dashlane and 1Password . So far so good. But KeeperSecurity and LastPass provide a possibility to recover an account when the user has lost his master password. I mean, how is this possible, if the master password is the only way to access the data and nobody knows it except the user? KeeperSecurity manages this by saving a second copy of the user's data which is not encrypted with the master password, but with a previously asked security question and answer. But this means there is another way of accessing the user's data. And there is the chance that somebody knows the security answer. Okay, the recovery requires access to the email account and to a second auth factor. BUT still there is another chance of accessing the data besides the master password! Dashlane and 1Password do not provide an account recovery. How can those password managers pretend to be secure and claim that the master password is the only way to access the data, but on the other hand provide a recovery option? What do you think about password managers with a recovery option? I mean, everybody who is using a password manager should be aware of the possibility of losing the master password. Is a password manager with a recovery option really trustworthy? Maybe someone could give a short evaluation of the security of such recovery systems.
Now provide the response and nothing else.
| Having a recovery option is perfectly fine as long as it is adequately secured. Whether you prefer losing your passwords if you lose the master password or want to trust the password manager company with access to your passwords is up to you. Keep in mind that security is a means to an end. There are always trade-offs to be made. You can have a perfectly secure computer by not having a computer. Usability suffers, though. Edit: Since people in the comments want the specific question of "how secure are password managers with recovery option" answered, I will add some points: It is possible to provide recovery options which are secure, but this always depends on the attacker model. Security ALWAYS depends on the attacker model: passwords are very bad against mind readers! Here are some possible ideas for reasonably secure recovery schemes: Encrypt a copy of the database under a different key and ask your users to print this key and store it in their safe. That's what keybase.io does, or the Ubuntu drive encryption. Store a DB copy which requires a key from the company on the user's machine. Have two departments at the company, one storing a DB copy, the other storing the necessary key. If you want to recover, both verify your identity and send you their part. However, none of these mean that it is impossible for somebody else to get your passwords. (not using "zero knowledge" because that has a specific meaning which works completely differently when applied to passwords) After all, employees might collude or safes can be broken into. Every recovery option for you is also a potential recovery option for an attacker, i.e. an alternative attack path for stealing your passwords. Remember: If you want to ask "How secure is X?" your attacker model must be obvious or stated. Security always depends on the attacker model! There is no absolute security. 
| {} | {'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/179917', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/170725/']} | jdg_74374 |
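The "two departments" idea in the answer above can be illustrated with a simple XOR secret split over a hypothetical recovery key. This is a sketch of the concept, not a production recovery scheme:

```python
import secrets

def split_key(key: bytes) -> tuple:
    """Split a key into two shares; neither share alone reveals anything."""
    share_a = secrets.token_bytes(len(key))               # department A stores this
    share_b = bytes(a ^ k for a, k in zip(share_a, key))  # department B stores this
    return share_a, share_b

def recover_key(share_a: bytes, share_b: bytes) -> bytes:
    """Both departments must cooperate to reconstruct the key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

master_key = secrets.token_bytes(32)
a, b = split_key(master_key)
assert recover_key(a, b) == master_key
```

Since one share is uniformly random and the other is the key XORed with it, either share on its own is statistically independent of the key, which is exactly the property the "verify your identity with both departments" scheme relies on.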
stackexchange | llm_judgeable_groundtruth_similarity | 34841000 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm starting out a new Laravel 5.2 project, and I've run into a strange authentication problem: Auth::check() returns false constantly. If you think this is a duplicate, read on. I've tried multiple suggestions to fixing this issue with no luck. The setting: The application environment is Laravel Homestead's php-7 branch (Vagrant box). User models use UUIDs instead if IDs (table has auto-incrementing id switched to a 36-character uuid primary key column). User model primaryKey has been changed from id to uuid and incrementing has been set to false. Used artisan make:auth to generate authentication baseline to work with. Authentication routes have been set for getLogin , postLogin , getEmail , postEmail , getReset and postReset and they are working fine. web middleware is used for all routes in the application at the moment. Auth config and session config inside config dir are at defaults (except a namespace change for auth model). Passing in invalid credentials triggers validation errors properly. After logging in the redirectPath on AuthController does work, but the guest middleware hijacks the request and redirects user back to login route. I can create users fine and change their forgotten passwords using the created route definitions. I can "log in" on the login route, but Auth::check() always returns false in the guest middleware ( Middleware\Authenticate class). I've tried the following session drivers without success: File Redis Cookie Database I even tried to revert the users table primary key to named id , but that didn't change anything. Something I noticed: as my user IDs are actually char(36) UUIDs, the database session driver did not save the user_id properly: it saved the integer characters from the user UUID and then cut off the rest of the value (e.g. users.uuid 3f2358907afafaf becomes session.user_id 3 ), or 0 if the user ID starts with a non-numeric character ( a-f ). 
I've found a huge amount of questions and threads and search results for this same issue. I tried the things most said would work, but I didn't get it working still: I have set the $user->primaryKey protected property to uuid (which affects $model->getKey() and $model->getAuthIdentifier() too). I have tried multiple different session drivers with no success. I have tried reverting to the users.id column name instead of users.uuid , no change. I have cleared all caches I can think of. I have checked that there is no whitespace output before views output (need to dig a bit deeper though in case I missed some files). The only thing which I have not properly tried yet is to just switch back the auto-incrementing IDs for users (instead of UUIDs). I've tied the UUID concept pretty tight already but I'll see what I can do to solve this issue. Please do ask for details if I'm missing something in this question.
Now provide the response and nothing else.
| You could use the mapStateToProps function to query the store when you connect the component to redux: import React from 'react';import { connect } from 'react-redux';import _ from 'lodash';const Foo = ({ item }) => <div>{JSON.stringify(item)}</div>;const mapStateToProps = (state, ownProps) => ({ item: _.find(state, 'id', ownProps.params.id)});export default connect(mapStateToProps)(Foo); (This example uses lodash - _ ) The mapStateToProps function takes in the whole redux state and your component's props, and from that you can decide what to send as props to your component. So given all of our items, look for the one with the id matching our URL. https://github.com/rackt/react-redux/blob/master/docs/api.md#arguments | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34841000', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/758088/']} | jdg_74375 |
stackexchange | llm_judgeable_groundtruth_similarity | 11934001 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I updated my android sdk, but when I create a new application I get new option on that I've never seen before. what this mean .I give a name to this parent activity , but when I run the application I get this error No Launcher activity found!The launch will only sync the application package on the device! The full console output [2012-08-13 13:54:35 - GG] ------------------------------[2012-08-13 13:54:35 - GG] Android Launch![2012-08-13 13:54:35 - GG] adb is running normally.[2012-08-13 13:54:35 - GG] No Launcher activity found![2012-08-13 13:54:35 - GG] The launch will only sync the application package on the device![2012-08-13 13:54:35 - GG] Performing sync[2012-08-13 13:54:35 - GG] Automatic Target Mode: launching new emulator with compatible AVD 'Androidvirtual'[2012-08-13 13:54:35 - GG] Launching a new emulator with Virtual Device 'Androidvirtual'[2012-08-13 13:54:35 - GG] ------------------------------[2012-08-13 13:54:35 - GG] Android Launch![2012-08-13 13:54:35 - GG] adb is running normally.[2012-08-13 13:54:35 - GG] No Launcher activity found![2012-08-13 13:54:35 - GG] The launch will only sync the application package on the device![2012-08-13 13:54:35 - GG] Performing sync[2012-08-13 13:54:35 - GG] Automatic Target Mode: launching new emulator with compatible AVD 'Androidvirtual'[2012-08-13 13:54:35 - GG] Launching a new emulator with Virtual Device 'Androidvirtual'[2012-08-13 13:54:48 - Emulator] WARNING: Data partition already in use. Changes will not persist![2012-08-13 13:54:50 - Emulator] WARNING: SD Card image already in use: /home/belkacem/.android/avd/Androidvirtual.avd/sdcard.img[2012-08-13 13:54:50 - GG] New emulator found: emulator-5554[2012-08-13 13:54:50 - GG] Waiting for HOME ('android.process.acore') to be launched...[2012-08-13 13:54:51 - Emulator] WARNING: Cache partition already in use. 
Changes will not persist![2012-08-13 13:54:51 - GG] New emulator found: emulator-5556[2012-08-13 13:54:51 - GG] Waiting for HOME ('android.process.acore') to be launched...[2012-08-13 13:55:18 - Emulator] Failed to create Context 0x3005[2012-08-13 13:55:18 - Emulator] emulator: WARNING: Could not initialize OpenglES emulation, using software renderer.[2012-08-13 13:55:18 - GG] emulator-5556 disconnected! Cancelling 'sync'![2012-08-13 13:55:20 - Emulator] Failed to create Context 0x3005[2012-08-13 13:55:20 - Emulator] emulator: WARNING: Could not initialize OpenglES emulation, using software renderer.[2012-08-13 13:55:20 - GG] emulator-5554 disconnected! Cancelling 'sync'! manifest.xml <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="main.java" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="16" android:targetSdkVersion="15" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name=".MainActivity" android:label="@string/title_activity_main" > <meta-data android:name="android.support.PARENT_ACTIVITY" android:value="d" /> </activity> </application></manifest>
Now provide the response and nothing else.
| It seems to be just a GCC bug in a special case when the lambda tries to access a protected member from parent class using fully qualified name. This does not work : class Base {protected: bool hasChild() const { return !childs_.empty(); } std::vector<Foo> childs_;};class Foo : public Base {public: bool hasGrandChild() const { return std::any_of(childs_.begin(), childs_.end(), [](Foo const &foo) { return foo.Base::hasChild(); }); } }; , but this works : class Foo : public Base {public: bool hasGrandChild() const { return std::any_of(childs_.begin(), childs_.end(), [](Foo const &foo) { return foo.hasChild(); }); } }; According to C++11, 5.1.2/3: The type of the lambda-expression (which is also the type of the closure object) is a unique, unnamed non-union class type — called the closure type — whose properties are described below. This class type is not an aggregate (8.5.1). The closure type is declared in the smallest block scope, class scope, or namespace scope that contains the corresponding lambda-expression . And then C++11, 11.7/1: A nested class is a member and as such has the same access rights as any other member. So the mentioned function-local lambda should have the same access rights as any other member of the class. Therefore it should be able to call a protected method from a parent class. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11934001', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1344201/']} | jdg_74376 |
stackexchange | llm_judgeable_groundtruth_similarity | 32889 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
[K] refers to Kontsevich's paper "Deformation quantization of Poisson manifolds, I". Background Let $X$ be a smooth affine variety (over $\mathbb{C}$ or maybe a field of characteristic zero) or resp. a smooth (compact?) real manifold. Let $A = \Gamma(X; \mathcal{O}_X)$ or resp. $C^\infty(X)$ . Denote the dg Lie algebra of polyvector fields on $X$ (with Schouten-Nijenhuis bracket and zero differential) by $T$ . Denote the dg Lie algebra of the shifted Hochschild cochain complex of $A$ (with Gerstenhaber bracket and Hochschild differential) by $D$ . Then the Hochschild-Kostant-Rosenberg theorem states that there is a quasi-isomorphism of dg vector spaces from $T$ to $D$ . However, the HKR map is not a map of dg Lie algebras . It is not a map of dg algebras , either (where the multiplication on $T$ is given by the wedge product and the multiplication on $D$ is given by the cup product of Hochschild cochains). I believe "Kontsevich formality" refers to the statement that, while the HKR map is not a quasi-isomorphism --- or even a morphism --- of dg Lie algebras , there is an $L_\infty$ quasi-isomorphism $U$ from $T$ to $D$ , and therefore $D$ is in fact formal as a dg Lie algebra . The first "Taylor coefficient" of the $L_\infty$ morphism $U$ is precisely the HKR map (see section 4.6.2 of [K]). Moreover, this quasi-isomorphism $U$ is compatible with the dg algebra structures on $T$ and $D$ (see section 8.2 of [K]), and it yields a "corrected HKR map" which is a dg algebra quasi-isomorphism. The "correction" comes from the square root of the $\hat{A}$ class of $X$ . See this previous MO question . Questions (0) Are all of my statements above correct? (1) In what way is the $L_\infty$ morphism $U$ compatible with the dg algebra structures? I don't understand what this means. (2) When $X$ is a smooth (compact?) real manifold, I think that all of the statements above are proved in [K]. 
When $X$ is a smooth affine variety, I think that the statements should all still be true. Where can I find proofs? (3) Moreover, the last section of [K] suggests that the statements are all still true when $X$ is a smooth possibly non-affine variety. For a general smooth variety, though, instead of taking the Hochschild cochain complex of $A = \Gamma(X;\mathcal{O}_X)$ , presumably we should take the Hochschild cochain complex of the (dg?) derived category of $X$ . Is this correct? If so, where can I find proofs? In the second-to-last sentence of [K], Kontsevich seems to claim that the statements for varieties are corollaries of the statements for real manifolds, but I don't see how this can possibly be true. In the last sentence of the paper, he says that he will prove these statements "in the next paper", but I'm not sure which paper "the next paper" is, nor am I even sure that it exists, since "Deformation quantization of Poisson manifolds, II" doesn't exist. P.S. I am not sure how to tag this question. Feel free to tag it as you wish.
Now provide the response and nothing else.
| To (1): Daniel is right, there is a map of homotopy Gerstenhaber algebras between the two algebras. However, the full story is quite complicated, and to show that the Hochschild cochains form a homotopy Gerstenhaber algebra is hard; it's known as the Deligne conjecture. I don't know the details of the proof. Recall that a Poisson algebra is a commutative algebra with a Lie bracket and these two products satisfy a Leibniz identity. A Gerstenhaber algebra is a bit like a Poisson algebra, except the Lie bracket is of degree 1 not 0. The bracket satisfies a graded Leibniz identity with respect to the commutative algebra structure. The formality morphism as homotopy Gerstenhaber algebras restricts to a formality morphism as homotopy Lie algebras and to a formality morphism as homotopy commutative algebras. In my view, the simplest proof of the formality of the Hochschild cochains of a nice enough algebra as a homotopy Gerstenhaber algebra is contained in http://arxiv.org/abs/math.KT/0605141 | {} | {'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/32889', 'https://mathoverflow.net', 'https://mathoverflow.net/users/83/']} | jdg_74377 |
stackexchange | llm_judgeable_groundtruth_similarity | 1737047 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The question is here: Let R = XY and let A be the event X < 0.5. Evaluate E[R | A]. And I know the fX(x). The given answer is here: The event A leaves us with a right triangle with a constant height. The conditional PDF is then 1/area = 8. The conditional expectation yields: And my solution is below: There seems to be some difference between my answer and the given answer... Why am I wrong? I have checked it several times and do not know... Thanks for your help! I don't know how to type math symbols. Sorry.
Now provide the response and nothing else.
| If $p\mid(r^4+1)$ for one of $p=3,5,7$ then $-1$ is a fourth power mod $p$, hence $(\mathbb{Z}/p\mathbb{Z})^{\times}$ has an element of order 8. But this is impossible because $(\mathbb{Z}/p\mathbb{Z})^{\times}$ has order $p-1\leq 6$. | {} | {'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/1737047', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/326854/']} | jdg_74378 |
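The answer's claim, that $r^4+1$ is never divisible by 3, 5, or 7, is easy to brute-force check over a full residue system, since $r^4+1 \bmod p$ only depends on $r \bmod p$:

```python
# For p in {3, 5, 7}, verify that r^4 + 1 is never 0 mod p by checking
# every residue r in 0..p-1.
def fourth_power_plus_one_hits_zero(p: int) -> bool:
    return any((pow(r, 4, p) + 1) % p == 0 for r in range(p))

for p in (3, 5, 7):
    print(p, fourth_power_plus_one_hits_zero(p))   # False for each

# Consistent with the group-theory argument: an element of order 8 cannot
# exist in a multiplicative group of order p - 1 <= 6.
```

For contrast, $p = 17$ does divide some $r^4+1$ (take $r = 2$, since $2^4+1 = 17$), matching the fact that $(\mathbb{Z}/17\mathbb{Z})^{\times}$ has order 16 and thus contains elements of order 8.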
stackexchange | llm_judgeable_groundtruth_similarity | 7621341 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We've all seen the interesting subdomains that you get automatically assigned when you deploy an app to Heroku with a bare "heroku create". Some examples: blazing-mist-4652, electric-night-4641, morning-frost-5543, radiant-river-7322, and so on. It seems they all follow a adjective-noun-4digitnumber pattern (for the most part). Did they simply type out a dictionary of some adjectives and nouns, then choose combinations from them at random when you push an app? Is there a Ruby gem that accomplishes this, perhaps provides a dictionary which one could search by parts of speech, or is this something to be done manually?
Now provide the response and nothing else.
| Engineer at the Heroku API team here: we went with the simplest approach to generate app names, which is basically what you suggested: keep arrays of adjectives and nouns in memory, pick an element from each at random and combine it with a random number from 1000 to 9999. Not the most thrilling code I've written, but it's interesting to see what we had to do in order to scale this: At first we were picking a name, trying to INSERT and then rescuing the uniqueness constraint error to pick a different name. This worked fine while we had a large pool of names (and a not-so-large set of apps using them), but at a certain scale we started to notice a lot of collisions during name generation. To make it more resilient we decided to pick several names and check which ones are still available with a single query. We obviously still need to check for errors and retry because of race conditions, but with so many apps in the table this is clearly more effective. It also has the added benefit of providing an easy hook for us to get an alert if our name pool is low (e.g. if 1/3 of the random names are taken, send an alert). The first time we had issues with collisions we just radically increased the size of our name pool by going from 2 digits to 4. With 61 adjectives and 74 nouns this took us from ~400k to ~40 million names ( 61 * 74 * 8999 ). But by the time we were running 2 million apps we started receiving collision alerts again, and at a much higher rate than expected: about half of the names were colliding, which made no sense considering our pool size and the number of apps running. The culprit, as you might have guessed, is that rand is a pretty bad pseudorandom number generator . Picking random elements and numbers with SecureRandom instead radically lowered the amount of collisions, making it match what we expected in the first place. With so much work going into scaling this approach we had to ask whether there's a better way to generate names in the first place. 
Some of the ideas discussed were: Make the name generation a function of the application id. This would be much faster and avoid the issue with collisions entirely, but on the downside it would waste a lot of names with deleted apps (and damn, we have A LOT of apps being created and deleted shortly after as part of different integration tests). Another option to make name generation deterministic is to have the pool of available names in the database. This would make it easy to do things like only reusing a name 2 weeks after the app was deleted. Excited to see what we'll do next time the collision alert triggers! Hope this helps anyone working on friendly name generation out there. | {} | {'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/7621341', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4233440/']} | jdg_74379 |
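The scheme the answer describes, a random adjective, noun, and four-digit number drawn from a secure RNG, can be sketched in Python. The word lists below are tiny stand-ins for illustration, not Heroku's actual pools:

```python
import secrets

ADJECTIVES = ["blazing", "electric", "morning", "radiant", "quiet", "misty"]
NOUNS = ["mist", "night", "frost", "river", "meadow", "shore"]

def random_app_name() -> str:
    """adjective-noun-NNNN, using secrets rather than a weak PRNG to avoid
    the collision problem described in the answer."""
    return "{}-{}-{}".format(
        secrets.choice(ADJECTIVES),
        secrets.choice(NOUNS),
        secrets.randbelow(9000) + 1000,   # 1000..9999 inclusive
    )

def candidate_names(taken: set, tries: int = 10) -> list:
    """Pick several names at once and keep those still available,
    mirroring the single-query availability check described above."""
    picks = {random_app_name() for _ in range(tries)}
    return sorted(picks - taken)

print(random_app_name())
```

Generating a batch and filtering it against the taken set is what lets the real implementation check availability with one database query instead of a retry loop per INSERT.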
stackexchange | llm_judgeable_groundtruth_similarity | 37535 |
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I often read that the Lorentz symmetry is manifest in the path integral formulation but is not in the canonical quantization - what does this really mean?
Now provide the response and nothing else.
| Manifest Lorentz symmetry means that one can see Lorentz invariance directly from the way the theory is formulated; typically when space and time are treated on the same footing as components of a 4-vector. In these cases, the Lorentz group generators are represented in a simple way (hence the ''manifest'' symmetry), but it is far less trivial to find a corresponding Hilbert space of state vectors on which the interacting energy-momentum 4-vector acts. However, a theory can be Lorentz invariant in a more indirect way, such as in the canonical formalism, where a Hilbert space and an associated Hamiltonian is specified directly. Then Lorentz invariance is established by proving the (then far less trivial) existence of 6 generators satisfying the commutation relations for the Lorentz generators, such that the interacting Hamiltonian and the free momentum generators transform jointly as a 4-vector. | {} | {'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/37535', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/12268/']} | jdg_74380 |
stackexchange | llm_judgeable_groundtruth_similarity | 49132167 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Text fields in Vuetify have rules props, which take an array of functions returning true or an error string. How to make them async, so that the validation could be made server-side using XHR? Something like: <v-text-field :rules="[v => { axios.get('/check?value=' + val).then(() => { return true }) }]">
Now provide the response and nothing else.
| One solution is to set the error-messages prop: <v-text-field v-model="input" :error-messages="errors"> and use the watch option: new Vue({ data () { return { input: '', errors: [] } }, watch: { input (val) { axios.get('/check?value=' + val).then(valid => { this.errors = valid ? [] : ['async error'] }) } }}); | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/49132167', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/912946/']} | jdg_74381 |
stackexchange | llm_judgeable_groundtruth_similarity | 14681714 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This compiles, but I've never seen it in any other code. Is it safe? Testclass():sources(new int[32]){} Instead of: Testclass(){ sources = new int[32];}
Now provide the response and nothing else.
| Use: Testclass():sources(new int[32]){} This is using member-initialization-list which is the preferred way to initialize members. By "safe" or "okay" you probably meant, whether it is exception-safe? What if new throws the bad_alloc exception? Well, in that case, the destructor will not be called, because the object is not fully-constructed, as constructor-body is not executed. There may be a resource leak if you've acquired any resource in the initialization list. Consider this, class X{ int *ints; // Order of declaration matters! Huge *huges; // It ensures that huges will be initialized after ints X() : ints(new int[32]), huges(new Huge[10000]) {}}; If new Huge[10000] throws an exception, the memory allocated to ints will leak! In such cases, function-try-block can be useful. See these: Function try blocks, but not in constructors What is the purpose of a function try block? If you think about this problem of exception-safety, you will soon realize that if a class manages just one resource, then the life will be easier . If a single class manages more than one resource, then you wouldn't be able to decide which one threw an exception in the member-initialization-list, and consequently, you wouldn't be able to decide which one is to be deallocated in the catch block of function-try-block. A resource leak is destined. However, if a class needs more than one resource, then first you encapsulate each of the different type of resource in a class, and declare objects of these resource-managing class as a member of your class. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14681714', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3758484/']} | jdg_74382 |
stackexchange | llm_judgeable_groundtruth_similarity | 2138293 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I don't know a thing about ABAP, apart from the fact that it has an OO side, and I would like to have some kind of helicopter view of it before I start to look at it in detail. I know I can find all of this when studying it , but like I said for starters I like to know what I am dealing with. Is it (always) compiled? Typestem : Is it strongly typed? Does it use type inference? Inheritance : single / multiple, interface-like structures ? Collections : Has it collections apart from arrays ? Has it generic collections? Does it use List comprehension ? How about (con/contra/in)variance in arrays, returntypes, parametertypes, overriding? Any Exceptionhandling? Any built-in design by contract support? Anything remarkable as opposed to other well known languages? ... Any general info about characteristics basically would be welcome!
Now provide the response and nothing else.
| Is it (always) compiled? ABAP is "compiled" into a sort of byte-code (called "load" for historical reasons) that is then executed by a virtual machine inside the kernel. You can compare this with Java, with one big difference: The load is not machine-independent, but optimized for the target machine type. This means that in a system landscape with several different types of application servers, you might have multiple pre-compiled loads for a single program. Not that you'll ever see any of this - the entire compilation process is handled automatically. Typestem : Is it strongly typed? Does it use type inference? Strongly typed with a system of generic types on top. Note that there's a complete data dictionary that is integrated into the language - very handy. Inheritance : single / multiple, interface-like structures ? Single inheritance. Interfaces are supported, including composite interfaces and implementation component renaming (two interfaces IF_FOO and IF_BAR can both define a method BAZ, and a class implementing both interfaces will then have two methods IF_FOO~BAZ and IF_BAR~BAZ). Collections : Has it collections apart from arrays? Has it generic collections? Does it use List comprehension? What you know as "array" in other programming languages does not really exist in ABAP - you'll usually use so-called "internal tables" instead. Think database-like structured in-memory tables. There are some ideas of collection classes spread out through different modules, but the canonical way to do this is to use internal tables - define a so called table type of lines that either represent references to instances or structures that contain such a reference. How about (con/contra/in)variance in arrays, returntypes, parametertypes, overriding? Arrays: see above. Overriding: You can not change the method signature when implementing interface method or overriding superclass methods. 
As for the parameters - that depends on whether you transfer data (or data references) or object references. In general, upcasting may happen implicitly while you have to perform the downcasting explicitly. Any Exceptionhandling? Yes. More than one way - again for the historical reasons (backward compatibility). Class-based exceptions are supported. Any built-in design by contract support? None that I'm aware of. Anything remarkable as opposed to other well known languages? Lots of stuff. You might want to check http://www.volker-wegert.de/en/node/17 and http://www.volker-wegert.de/en/node/21 for an admittedly biased overview :-)
stackexchange | llm_judgeable_groundtruth_similarity | 13851535 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a pandas DataFrame and I want to delete rows from it where the length of the string in a particular column is greater than 2. I expect to be able to do this (per this answer ): df[(len(df['column name']) < 2)] but I just get the error: KeyError: u'no item named False' What am I doing wrong? (Note: I know I can use df.dropna() to get rid of rows that contain any NaN , but I didn't see how to remove rows based on a conditional expression.)
Now provide the response and nothing else.
| To directly answer this question's original title "How to delete rows from a pandas DataFrame based on a conditional expression" (which I understand is not necessarily the OP's problem but could help other users coming across this question) one way to do this is to use the drop method: df = df.drop(some labels) df = df.drop(df[<some boolean condition>].index) Example To remove all rows where column 'score' is < 50: df = df.drop(df[df.score < 50].index) In place version (as pointed out in comments) df.drop(df[df.score < 50].index, inplace=True) Multiple conditions (see Boolean Indexing ) The operators are: | for or , & for and , and ~ for not . These must be grouped by using parentheses. To remove all rows where column 'score' is < 50 and > 20 df = df.drop(df[(df.score < 50) & (df.score > 20)].index)
stackexchange | llm_judgeable_groundtruth_similarity | 29809524 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
The scrapy framework ( https://github.com/scrapy/scrapy ) provides a library for use when logging into websites that require authentication, https://github.com/scrapy/loginform . I have looked through the docs for both programs however I cannot seem to figure out how to get scrapy to call loginform before running. The login works fine with just loginform. Thanks
Now provide the response and nothing else.
| loginform is just a library, totally decoupled from Scrapy. You have to write the code to plug it in the spider you want, probably in a callback method. Here is an example of a structure to do this: import scrapyfrom loginform import fill_login_formclass MySpiderWithLogin(scrapy.Spider): name = 'my-spider' start_urls = [ 'http://somewebsite.com/some-login-protected-page', 'http://somewebsite.com/another-protected-page', ] login_url = 'http://somewebsite.com/login-page' login_user = 'your-username' login_password = 'secret-password-here' def start_requests(self): # let's start by sending a first request to login page yield scrapy.Request(self.login_url, self.parse_login) def parse_login(self, response): # got the login page, let's fill the login form... data, url, method = fill_login_form(response.url, response.body, self.login_user, self.login_password) # ... and send a request with our login data return scrapy.FormRequest(url, formdata=dict(data), method=method, callback=self.start_crawl) def start_crawl(self, response): # OK, we're in, let's start crawling the protected pages for url in self.start_urls: yield scrapy.Request(url) def parse(self, response): # do stuff with the logged in response | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29809524', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2484690/']} | jdg_74385 |
stackexchange | llm_judgeable_groundtruth_similarity | 5368048 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This is mostly an out-of-curiosity question. Consider the following functions var closure ;function f0() { var x = new BigObject() ; var y = 0 ; closure = function(){ return 7; } ;}function f1() { var x = BigObject() ; closure = (function(y) { return function(){return y++;} ; })(0) ;}function f2() { var x = BigObject() ; var y = 0 ; closure = function(){ return y++ ; } ;} In every case, after the function has been executed, there is (I think) no way to reach x and so the BigObject can be garbage collected, as long as x is the last reference to it. A simple minded interpreter would capture the whole scope chain whenever a function expression is evaluated. (For one thing, you need to do this to make calls to eval work -- example below). A smarter implementation might avoid this in f0 and f1. An even smarter implementation would allow y to be retained, but not x , as is needed for f2 to be efficient. My question is how do the modern JavaScript engines (JaegerMonkey, V8, etc.) deal with these situations? Finally, here is an example that shows that variables may need to be retained even if they are never mentioned in the nested function. var f = (function(x, y){ return function(str) { return eval(str) ; } } )(4, 5) ;f("1+2") ; // 3f("x+y") ; // 9f("x=6") ;f("x+y") ; // 11 However, there are restrictions that prevent one from sneaking in a call to eval in ways that might be missed by the compiler.
Now provide the response and nothing else.
| It's not true that there are restrictions that prevent you from calling eval that would be missed by static-analysis: it's just that such references to to eval run in the global scope. Note that this is a change in ES5 from ES3 where indirect and direct references to eval both ran in the local scope, and as such, I'm unsure whether anything actually does any optimizations based upon this fact. An obvious way to test this is to make BigObject be a really big object, and force a gc after running f0–f2. (Because, hey, as much as I think I know the answer, testing is always better!) So… The test var closure;function BigObject() { var a = ''; for (var i = 0; i <= 0xFFFF; i++) a += String.fromCharCode(i); return new String(a); // Turn this into an actual object}function f0() { var x = new BigObject(); var y = 0; closure = function(){ return 7; };}function f1() { var x = new BigObject(); closure = (function(y) { return function(){return y++;}; })(0);}function f2() { var x = new BigObject(); var y = 0; closure = function(){ return y++; };}function f3() { var x = new BigObject(); var y = 0; closure = eval("(function(){ return 7; })"); // direct eval}function f4() { var x = new BigObject(); var y = 0; closure = (1,eval)("(function(){ return 7; })"); // indirect eval (evaluates in global scope)}function f5() { var x = new BigObject(); var y = 0; closure = (function(){ return eval("(function(){ return 7; })"); })();}function f6() { var x = new BigObject(); var y = 0; closure = function(){ return eval("(function(){ return 7; })"); };}function f7() { var x = new BigObject(); var y = 0; closure = (function(){ return (1,eval)("(function(){ return 7; })"); })();}function f8() { var x = new BigObject(); var y = 0; closure = function(){ return (1,eval)("(function(){ return 7; })"); };}function f9() { var x = new BigObject(); var y = 0; closure = new Function("return 7;"); // creates function in global scope} I've added tests for eval/Function, seeming these are also interesting 
cases. The difference between f5/f6 is interesting, because f5 is really just identical to f3, given what is really an identical function for closure; f6 merely returns something that once evaluated gives that, and as the eval hasn't yet been evaluated, the compiler can't know that there is no reference to x within it.
V8 gsnedders@dolores:~$ v8 --expose-gc --trace_gc --shell foo.jsV8 version 3.0.7> gc();Mark-sweep 0.8 -> 0.7 MB, 1 ms.> f0();Scavenge 1.7 -> 1.7 MB, 2 ms.Scavenge 2.4 -> 2.4 MB, 2 ms.Scavenge 3.9 -> 3.9 MB, 4 ms.> gc(); Mark-sweep 5.2 -> 0.7 MB, 3 ms.> f1();Scavenge 4.7 -> 4.7 MB, 9 ms.> gc();Mark-sweep 5.2 -> 0.7 MB, 3 ms.> f2();Scavenge 4.8 -> 4.8 MB, 6 ms.> gc();Mark-sweep 5.3 -> 0.8 MB, 3 ms.> f3();> gc();Mark-sweep 5.3 -> 5.2 MB, 17 ms.> f4();> gc();Mark-sweep 9.7 -> 0.7 MB, 5 ms.> f5();> gc();Mark-sweep 5.3 -> 5.2 MB, 12 ms.> f6();> gc();Mark-sweep 9.7 -> 5.2 MB, 14 ms.> f7();> gc();Mark-sweep 9.7 -> 0.7 MB, 5 ms.> f8();> gc();Mark-sweep 5.2 -> 0.7 MB, 2 ms.> f9();> gc();Mark-sweep 5.2 -> 0.7 MB, 2 ms. V8 appears to GC x on everything apart from f3, f5, and f6. This is identical to SpiderMonkey, see analysis above. (Note however that the numbers aren't detailed enough to tell whether y is being GC'd when x is not, I've not bothered to investigate this.) Carakan I'm not going to bother running this again, but needless to say behaviour is identical to SpiderMonkey and V8. Harder to test without a JS shell, but doable with time. JSC (Nitro) and Chakra Building JSC is a pain on Linux, and Chakra doesn't run on Linux. I believe JSC has the same behaviour to the above engines, and I'd be surprised if Chakra didn't have too. (Doing anything better quickly becomes very complex, doing anything worse, well, you'd almost never be doing GC and have serious memory issues…) | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/5368048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/667690/']} | jdg_74386 |
stackexchange | llm_judgeable_groundtruth_similarity | 3088059 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to link a UILabel with an IBOutlet created in my class. My application is crashing with the following error. What does this mean? How can I fix it? *** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<UIViewController 0x6e36ae0> setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key XXX.'
Now provide the response and nothing else.
| Your view controller may have the wrong class in your xib. I downloaded your project. The error you are getting is 'NSUnknownKeyException', reason: '[<UIViewController 0x3927310> setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key string.' It is caused by the Second view controller in MainWindow.xib having a class of UIViewController instead of SecondView . Changing to the correct class resolves the problem. By the way, it is bad practice to have names like "string" in Objective-C. It invites a runtime naming collision. Avoid them even in once off practice apps. Naming collisions can be very hard to track down and you don't want to waste the time. Another possible reason for this error: when copying & pasting elements from one controller into another, Xcode somehow keeps that link to the original controller, even after editing & relinking this element into the new controller. Another possible reason for this error: Bad Outlet. You have either removed or renamed an outlet name in your .h file. Remove it in .xib or .storyboard file's Connection Inspector. One more possible reason (In my case) Extension of UIView with bindable properties and setting values for those bindable properties (i.e. shadow, corner radius etc.) 
then remove those properties from UIView extension (for some reason) then the following <userDefinedRuntimeAttributes> remained in xml (of foo.storyboard ): <userDefinedRuntimeAttributes> <userDefinedRuntimeAttribute type="color" keyPath="shadowColor"> <color key="value" white="0.0" alpha="1" colorSpace="custom" customColorSpace="genericGamma22GrayColorSpace"/> </userDefinedRuntimeAttribute> <userDefinedRuntimeAttribute type="number" keyPath="shadowOpacity"> <real key="value" value="50"/> </userDefinedRuntimeAttribute> <userDefinedRuntimeAttribute type="point" keyPath="shadowOffset"> <point key="value" x="5" y="5"/> </userDefinedRuntimeAttribute> <userDefinedRuntimeAttribute type="number" keyPath="shadowRadius"> <real key="value" value="16"/> </userDefinedRuntimeAttribute> <userDefinedRuntimeAttribute type="number" keyPath="borderWidthValue"> <real key="value" value="0.0"/> </userDefinedRuntimeAttribute></userDefinedRuntimeAttributes> Solution: Right click on foo.storyboard > Open as Source Code > search by keyPath (i.e. shadowRadius) > Delete the </userDefinedRuntimeAttributes> that is causing the problem | {} | {'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/3088059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/372516/']} | jdg_74387
stackexchange | llm_judgeable_groundtruth_similarity | 17002969 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
How to convert program parameter from argv to int64_t ? atoi() is suitable only for 32 bit integers.
Now provide the response and nothing else.
| A C99 conforming attempt. [edit] employed @R. correction // Note: Typical values of SCNd64 include "lld" and "ld".#include <inttypes.h>#include <stdio.h>int64_t S64(const char *s) { int64_t i; char c ; int scanned = sscanf(s, "%" SCNd64 "%c", &i, &c); if (scanned == 1) return i; if (scanned > 1) { // TBD about extra data found return i; } // TBD failed to scan; return 0; }int main(int argc, char *argv[]) { if (argc > 1) { int64_t i = S64(argv[1]); printf("%" PRId64 "\n", i); } return 0;}
stackexchange | llm_judgeable_groundtruth_similarity | 30756488 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
While using Android Studio on Ubuntu there are scenarios where the IDE will stop responding to keyboard input or start inputting random characters? How to fix this? Or possible workarounds.
Now provide the response and nothing else.
| This is a known issue with IBus: The bug seems to be triggered by some missing synchronization between IBus (the server) and Xlib + AWT (the client JVM), exposed by a combination of a quick succession of key presses and the client’s slower event handling. The 2 workarounds that helped me were: Workaround #1 : Force ibus in synchronous mode $ IBUS_ENABLE_SYNC_MODE=1 ibus-daemon -xrd Do this preferably before starting Studio. This workaround was suggested in https://code.google.com/p/ibus/issues/detail?id=1733 for a different Java application facing the same problems. Workaround #2: Disable IBus input in Studio $ XMODIFIERS= ./bin/studio.sh This will only disable input methods for Studio, not the other applications. Restarting the daemon while Studio is running (‘ibus-daemon -rd’) effectively disables the input methods for all other applications, and can also crash Studio's JVM with a segmentation fault.
stackexchange | llm_judgeable_groundtruth_similarity | 31767 |
Below is a question asked on the forum politics.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am not sure what the criteria are to be a permanent member of the UN Security Council , but I do wonder how Russia managed to keep its veto power after the dissolution of the Soviet Union. I think, after what happened to the USSR in 1991, the country lost some of its power and even its name changed. It was an opportunity for the other countries such as USA, China, France, and the United Kingdom to get rid of Russia once and for all from the UN Security Council, but that did not happen. Why not?
Now provide the response and nothing else.
| Essentially, you got a permanent seat on the Security Council if you were one of the major powers who won WW2 and went about setting up the post-war peace organisation, i.e the United Nations. When the Soviet Union (which was a union of multiple soviet republics) dissolved, Russia claimed itself as the successor state on the grounds it contained 51% of the population and 77% of the territory of the Soviet Union. They thus agreed to inherit all international treaties and responsibilities of the Soviet Union and were internationally recognised as such . As such, it was perfectly legitimate for them to inherit the seat on the Security Council. A similar example might be if Scotland had left the UK, the remaining state would not have lost its permanent seat. As the UN says in this article All international agreements, such as those governing membership of the UN Security Council, relate to a nation as a legal entity. Even if that nation changes it name, has a part of it split off and declare independence, or undergoes a revolution or any other form of change of government, that nation is still considered to be the same legal entity. It is still bound by all the same laws and treaties as before and it still enjoys the same statuses as before. The article goes on to say ...when the USSR broke up in 1991 Russia successfully argued that it should be recognised as the continuing state and so it inherited, among other things, the USSR’s permanent seat on the Security Council. This is partially because Russia received the backing of lots of the other former soviet states to remain as the successor state. Russia inherited all the Soviet Union's assets and also its foreign debt of US$70bn . | {} | {'log_upvote_score': 7, 'links': ['https://politics.stackexchange.com/questions/31767', 'https://politics.stackexchange.com', 'https://politics.stackexchange.com/users/5785/']} | jdg_74390 |
stackexchange | llm_judgeable_groundtruth_similarity | 16974683 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am currently using the following: $a = array('foo' => 'bar', 'bar' => 'foo'); if(isset($a['foo']) && isset($a['bar'])){ echo 'all exist'; } However, I will have several more array keys than foo and bar that I must check for. Is there a more efficient way to check for each required key than adding an isset for each required entry?
Now provide the response and nothing else.
| You can combine them in a single isset() call: if (isset($a['foo'], $a['bar'])) { echo 'all exist';} If you have an array of all the keys that are required, you can do: if (count(array_diff($required_keys, array_keys($a))) == 0) { echo 'all exist';}
stackexchange | llm_judgeable_groundtruth_similarity | 29055654 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
We have those convenient functions in calendar: let calendar = NSCalendar.currentCalendar()calendar.isDateInToday(date)calendar.isDateInTomorrow(date) But I am missing those two: calendar.isDateInNextWeek(date) calendar.isDateInNextMonth(date) As @Rob mentioned, the meaning is: "in the calendar week starting this coming Sunday, going through the following Saturday" I having a hard time figuring out how to implement those functions in a robust way that covers all the corner cases. Can someone provide an assistant?
Now provide the response and nothing else.
| Problem solved using Swift 5 and extending Calendar . extension Calendar { private var currentDate: Date { return Date() } func isDateInThisWeek(_ date: Date) -> Bool { return isDate(date, equalTo: currentDate, toGranularity: .weekOfYear) } func isDateInThisMonth(_ date: Date) -> Bool { return isDate(date, equalTo: currentDate, toGranularity: .month) } func isDateInNextWeek(_ date: Date) -> Bool { guard let nextWeek = self.date(byAdding: DateComponents(weekOfYear: 1), to: currentDate) else { return false } return isDate(date, equalTo: nextWeek, toGranularity: .weekOfYear) } func isDateInNextMonth(_ date: Date) -> Bool { guard let nextMonth = self.date(byAdding: DateComponents(month: 1), to: currentDate) else { return false } return isDate(date, equalTo: nextMonth, toGranularity: .month) } func isDateInFollowingMonth(_ date: Date) -> Bool { guard let followingMonth = self.date(byAdding: DateComponents(month: 2), to: currentDate) else { return false } return isDate(date, equalTo: followingMonth, toGranularity: .month) }} Usage: let day: Double = 60 * 60 * 24let currentDate = Date() // Thu 25 Apr 2019let futureDate = Date(timeInterval: 3 * day, since: currentDate) // Sun 28 Apr 2019if Calendar.current.isDateInThisWeek(futureDate) { // this will be executed if first day of week is Monday} else { // this will be executed if first day of week is Sunday} Code evaluates if the dates are in the range (sameWeek, nextWeek) according to Calendar instance. If Calendar instance is current it will determine start of the week (Monday or Sunday) according to device settings, but if you want different behaviour you can change Calendar instance: Calendar(identifier: .chinese).isDateInNextWeek(someDate) This code also works if we're trying to create DateComponents with improper values like so: DateComponents(year: 2019, month: 13) .In this case it creates date 01 Jan 2020 . 
| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/29055654', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/412992/']} | jdg_74392 |
stackexchange | llm_judgeable_groundtruth_similarity | 483253 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a few Cloud Services, and a VM running Redis in Azure. From what I understand I need to create a Virtual Network so the cloud services can communicate to Redis on the VM. That was easy. Now what I would like to do is set up DNS so I don't have to specify IP Addresses everywhere. The articles I am finding all deal with integrating an on-site DNS server, but I don't have that. Is there any way to use godaddy, or dnsimple for this? What about just installing a simple DNS service on the current VM? I guess my question can be summed up as what do I need to do to make it so my cloud services can communicate to my VN via a DNS name?
Now provide the response and nothing else.
| There are some "hidden" features of VN in Azure that will help you. First of all, yes, you are correct. If you create a Virtual Network, name resolution will not work unless you provide your own DNS Server and set it up to allow dynamic updates. You can't use public DNS Services to provide DNS name resolution for Windows Azure Virtual Network. So here is your solution. You must start "Clean" because you cannot change DNS Server IP Address once VNet has running Virtual Machines in it. Create the VNet as usual (and its subnets) Provide DNS Server address. Set this address to be xxx.xxx.xxx.4 (4 will always be the first IP Address assigned in a given SubNet!) ! On that clean VNet, Create a new VM with Windows Server. That VM will get xxx.xxx.xxx.4 IP Address Install and configure DNS Server Role on this machine DO NOT SET STATIC IP ADDRESS OF THIS MACHINE! Create rest of the VMs as usual Things to be aware of: Never assign static IP Addresses to any VM in Windows Azure. They must have DHCP allocated IP Addresses Be careful to first create the DNS Machine, so that it get .4 IP Address Your DNS VM will preserve its IP Address (.4) as long as you do not DELETE it. This architecture has proven to be valid and works within Windows Azure IaaS (a.k.a. Virtual Machines) | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/483253', 'https://serverfault.com', 'https://serverfault.com/users/-1/']} | jdg_74393 |
stackexchange | llm_judgeable_groundtruth_similarity | 9536 |
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
A 1024*1024 pixel image has around one million pixels. If I wanted to connect each pixel to R, G, B input neurons, then more than 3 million input neurons would be needed. It would be really hard to train a neural network which has millions of inputs. How is it possible to reduce the number of neurons?
Now provide the response and nothing else.
| There are several ways to make this big number trainable:

- Use CNNs
- Auto-encoders (see Reducing the Dimensionality of Data with Neural Networks )
- Dimensionality reduction of the input:
  - Scale the image down
  - PCA / LDA

Troll answer: if you really meant "only a few neurons" then you might want to have a look at spiking neural networks . Those are incredibly computationally intensive, need a lot of hand-crafting and still get worse performance than normal neural networks for most tasks ... but you only need very few of them. | {} | {'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/9536', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/15024/']} | jdg_74394
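To make the "scale the image down" option concrete, here is a minimal pure-Python sketch of 2x2 average pooling (a real pipeline would use a library such as NumPy or Pillow; this toy version only shows the idea):

```python
def avg_pool_2x2(image):
    """Downscale a 2D grayscale image (list of lists) by averaging 2x2 blocks."""
    h, w = len(image), len(image[0])
    pooled = []
    for i in range(0, h, 2):
        row = []
        for j in range(0, w, 2):
            block = [image[i][j], image[i][j + 1],
                     image[i + 1][j], image[i + 1][j + 1]]
            row.append(sum(block) / 4.0)
        pooled.append(row)
    return pooled

# A tiny 4x4 "image" shrinks to 2x2.
image = [[0, 0, 8, 8],
         [0, 0, 8, 8],
         [2, 2, 4, 4],
         [2, 2, 4, 4]]

print(avg_pool_2x2(image))  # [[0.0, 8.0], [2.0, 4.0]]
```

Each pooling step halves both dimensions, so one step cuts a 1024*1024 RGB input from about 3 million values to about 786k. This kind of averaging is also essentially what the pooling layers inside a CNN do.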
stackexchange | llm_judgeable_groundtruth_similarity | 3827169 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
This problem and its solution were given in class. I don't understand why the solution is the solution. Could someone please explain in detail? I am sorry for not using MathJax: it isn't working for me, this didn't help. Problem: find an example of an exists-sentence (a formula where every variable is under the existence quantifier) such that the sentence is true on an infinite model $M$ (i.e. a model with an infinite carrier), yet on every submodel of $M$ , the sentence is false. We were told to use the signature of an equivalence relation ~ and two different unary operators $f$ and $g$ . The answer to the problem is $\exists x\exists y \neg(x\sim y)$ . Now, suppose $N<M$ , $a$ and $b$ are in $N$ , and $a\sim b$ . The smallest submodel is $\{a, b, f(a), f(f(a)),\dots, f(b), f(f(b)), \dots, g(a), g(g(a)), \dots, g(b), g(g(b)), \dots\}$ . I don't see how two non-equivalent elements could not exist in $N$ . Is my reasoning wrong? Could I have missed something obvious in class? Thank you.
Now provide the response and nothing else.
| As Noah and Eric pointed out, the statement of the problem is missing the word "proper" (the sentence should be false only on the proper substructures of $M$ , since $M$ is always a substructure of itself). And the problem can be solved vacuously by considering a structure $M$ with no proper substructures. The solution as you described it makes no sense. Here's an example which does have proper substructures and which I believe is similar in spirit to the intention of the proposed solution (but simpler). Consider the language $\{P,f\}$ , where $P$ is a unary relation symbol and $f$ is a unary function symbol. Let $M = \mathbb{N}$ , where $P^M$ holds only of $0$ and $f^M$ is the successor function $f^M(n) = n+1$ . The substructures of $M$ are of the form $\{k,k+1,k+2,\dots\}$ for any $k$ . Consider the sentence $\exists x\, P(x)$ . This sentence is true in $M$ (witnessed by $0$ ), but false in every proper substructure of $M$ (since no proper substructure of $M$ contains $0$ ). | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3827169', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/540252/']} | jdg_74395
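The answer's counterexample can be illustrated with a small finite truncation in Python. This only approximates the infinite structure (it uses the first 100 naturals in place of $\mathbb{N}$), but it shows the key point: $\exists x\, P(x)$ holds in $M$ yet fails in every tail substructure $\{k, k+1, \dots\}$ with $k \ge 1$.

```python
def exists_P(domain, P):
    """Evaluate the sentence 'there exists x with P(x)' over a finite domain."""
    return any(P(x) for x in domain)

P = lambda n: n == 0   # P holds only of 0
M = range(0, 100)      # finite truncation of the model M = N

# True in M itself, witnessed by 0 ...
print(exists_P(M, P))  # True

# ... but false in every proper substructure {k, k+1, ...} with k >= 1,
# since none of them contains 0.
print(all(not exists_P(range(k, 100), P) for k in range(1, 100)))  # True
```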
stackexchange | llm_judgeable_groundtruth_similarity | 48928930 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm using CloseableHttpResponse (from apache-httpclient-4.5.3) and I'm not sure I'm using it right. I saw an answer with no votes suggesting to use EntityUtils.consume in a finally block:

CloseableHttpResponse response1 = httpclient.execute(httpGet);
try {
    System.out.println(response1.getStatusLine());
} finally {
    EntityUtils.consume(response1.getEntity());
}

CloseableHttpClient is abstract and has no close method to call, although in this answer it's used:

CloseableHttpResponse response = httpclient.execute(httpget);
try {
    //do something
} finally {
    response.close();
}

Currently I'm using try-with-resources for CloseableHttpClient and CloseableHttpResponse inside my send method. Am I missing any open resource or using it in a wrong way?

private CloseableHttpResponse send() throws URISyntaxException, UnsupportedEncodingException, IOException, ClientProtocolException {
    URIBuilder uriBuilder = new URIBuilder(BASE_URL);
    HttpHost target = new HttpHost(uriBuilder.getHost(), uriBuilder.getPort(), uriBuilder.getScheme());
    HttpPost post = new HttpPost(uriBuilder.build());
    try (CloseableHttpClient httpClient = HttpClients.custom().build();
         CloseableHttpResponse response = httpClient.execute(target, post)) {
        return response;
    }
}
Now provide the response and nothing else.
| It has been explained in detail in the docs here . Quoting the pseudo code from the docs, here's a typical way to allocate/deallocate an instance of CloseableHttpClient :

try (CloseableHttpClient httpclient = HttpClients.createDefault()) {
    <...>
}

The same applies to CloseableHttpResponse :

try (CloseableHttpResponse response = httpclient.execute(httpget)) {
    <...>
}

Now, about the close method in CloseableHttpClient . CloseableHttpClient is an abstract class that implements the Closeable interface. That is, although it doesn't have a close method itself, the classes that extend it are required to implement the close method. One such class is InternalHttpClient . You can check the source code for the details. Before Java 7, an explicit close would be required:

CloseableHttpClient httpclient = HttpClients.createDefault();
try {
    <...>
} finally {
    httpclient.close();
}

CloseableHttpResponse response = httpclient.execute(httpget);
try {
    <...>
} finally {
    response.close();
} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48928930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7294900/']} | jdg_74396
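As a side note, the try-with-resources pattern described here has close analogues in other languages. The following hedged Python sketch shows the equivalent idiom with contextlib.closing, where io.StringIO merely stands in for the HTTP client/response objects of the Java example:

```python
import io
from contextlib import closing

# Any object with a close() method works; StringIO stands in for the
# client/response objects that try-with-resources manages in Java.
buf = io.StringIO("HTTP/1.1 200 OK")

with closing(buf) as response:
    status_line = response.read()
    print(status_line)  # HTTP/1.1 200 OK

# close() is guaranteed to have run once the block exits, even on exceptions.
print(buf.closed)  # True
```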
stackexchange | llm_judgeable_groundtruth_similarity | 8698285 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When to use volatile keyword vs synchronization in multithreading?
Now provide the response and nothing else.
| Use volatile to guarantee that each read access to a variable will see the latest value written to that variable. Use synchronized whenever you need values to be stable for multiple instructions. (Note that this does not necessarily mean multiple statements; the single statement

var++; // NOT thread safe!

is not thread-safe even if var is declared volatile.) You need to do this:

synchronized (LOCK_OBJECT) {
    var++;
}

See here for a nice summary of this issue. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8698285', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1077644/']} | jdg_74397
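Python has no equivalent of Java's volatile, but the "stable for multiple instructions" point translates directly: a lock plays the role of the synchronized block, making the read-modify-write of the counter atomic with respect to the other threads. A minimal sketch:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # analogue of synchronized(LOCK_OBJECT)
            counter += 1  # read-modify-write, now safe from interleaving

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000
```

Without the lock, the final count may fall short of 40000, because two threads can read the same old value before either writes back.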
stackexchange | llm_judgeable_groundtruth_similarity | 1480 |
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would.
Question:
I recall being told -- at tea, once upon a time -- that there exist models of the real numbers which have no unmeasurable sets. This seems a bit bizarre; since any two models of the reals are isomorphic, you'd expect any two models to have the same collection of subsets. Can anyone tell me exactly what the story here is? Have I misremembered something? Is this some subtlety involving how strong a choice axiom you use to define your set theory?
Now provide the response and nothing else.
| As John Goodrick is asking in a few places, you have to be careful in stating what you mean by "a model of the reals". If you're going to talk about sets of reals, then you need to have variables ranging over reals, and also variables ranging over sets of reals. You also of course want symbols in your language for the field operations and ordering, and possibly more.

Three Options

One way to do this is to use the language of second-order analysis, which is bi-interpretable with the language of third-order number theory. (It's straightforward to translate between real numbers and sets of natural numbers, and then between sets of real numbers and sets of sets of naturals.) Another way to do this is to use ZF, which talks about the reals and sets of reals, but also many many other things. (Far more than any mathematician who's not a logician (or perhaps category theorist?) ever uses.) There's also an intermediate strategy, which is basically what Russell and Whitehead did in Principia Mathematica, where you have some variables ranging over objects at the bottom (which might be real numbers, or anything else), and then variables ranging over sets of objects, and then variables ranging over sets of sets of objects, and so on to arbitrarily high levels. This is still far weaker than ZF, because you don't get sets that mix levels, and you also can't make sense of infinitely high levels.

First-order and Higher-order logic

If you take the first or third option, then you have two more choices, which correspond to what David Speyer was saying. You can require that variables that range over sets of things range over "honest subsets" of the collection of things they're supposed to be sets of. Or you can interpret the set variables in a "whacked model". (The technical term is a "Henkin model".)
On this interpretation, the "sets" are just further objects in your domain, and "membership" is just interpreted as some arbitrary relation between the objects of one type and the objects of the "set" type, and you interpret all your axioms in first-order logic. The difference is that the honest interpretation uses second-order logic, while the Henkin interpretation just uses first-order logic. Second-order logic (and higher-order logic) is nice in that it lets you prove all sorts of uniqueness results - there is a unique model of honest second order Peano arithmetic, and if you require honest set-hood then this means there will be unique models at the third order level and higher, giving you one result that you remember. But first-order logic is nice because there's actually a proof system - that is, there is a set of rules for manipulating sentences such that any sentence true in every first-order model can actually be reached by doing these manipulations. That is, Gödel's Completeness Theorem applies. However, his Incompleteness Theorems also apply - thus, there are lots of models of first-order Peano arithmetic, and then there are even more Henkin models of "second-order" Peano arithmetic, and far far more Henkin models of "third-order" Peano arithmetic, which is the theory you're interested in. Unfortunately, I don't know what these Henkin models look like. It all depends on what set existence axioms you use. There's a lot of discussion of this stuff for "second-order" Peano arithmetic in Steven Simpson's book Subsystems of Second-Order Arithmetic , which is the canonical text of the field known as reverse mathematics. However, none of that talks about arbitrary sets of reals, which is what you're interested in.

Solovay's results

The other result you mention, which is cited in one of the other answers here, takes the other option from above. That is, we do everything in ZF and see what different models of ZF are like.
(Note that I don't say ZFC - of course if you have choice, then you have non-measurable sets of reals.) Every model of ZF has a set it calls ω, which is the set it thinks of as "the natural numbers". Set theorists then talk about the powerset of this set as "the real numbers" - you might prefer to think of this set as "the Cantor set", and some other object in the model of ZF as its "real numbers", but there will be some nice translation between the Cantor set and your set, that gives the relevant topological and measure-theoretic properties. Of course, since we're just talking about models of ZF, none of this is going to be the real real numbers. After all, since ZF is a first-order theory, the Löwenheim-Skolem theorem guarantees that it has a countable model. This model thinks that its "real numbers" are uncountable, but that's just because the model doesn't know what uncountable really means. (This is called Skolem's Paradox; see Wikipedia, http://en.wikipedia.org/wiki/Skolem%27s_paradox , and the Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/paradox-skolem/ .) What Solovay showed is that if you start with a countable model of ZFC that has an inaccessible cardinal (assuming that inaccessibles are consistent, then there is such a model, and we have almost as much reason to believe that inaccessibles are consistent as we do to believe that ZFC is consistent) then you can use Cohen's method of forcing to construct a different (countable) model of ZF where there are no unmeasurable sets of "reals". Of course, the first result you stated (that any two models of the reals are isomorphic) holds within any model of set theory, assuming you're talking about "honest" second-order models (that is, models of reals that are "honest" with respect to the notion of "subset" that you get from the ambient model of ZF). But the notion of "honest" second-order model doesn't even translate when you move from one model of set theory to another.
So Solovay's model of ZF has the property that every "honest" model of second-order analysis (or third-order number theory) has no non-measurable sets, while any model of ZFC has the property that every "honest" model of second-order analysis (or third-order number theory) does have non-measurable sets. That's how your two results are consistent. | {} | {'log_upvote_score': 6, 'links': ['https://mathoverflow.net/questions/1480', 'https://mathoverflow.net', 'https://mathoverflow.net/users/35508/']} | jdg_74398 |
stackexchange | llm_judgeable_groundtruth_similarity | 610192 |
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm considering options for changing a 'high availability' website that provides a service via an https api. The current setup is:

- Two self-contained VMs, from different cloud providers (AWS and RackSpace)
- One DNS load-balancer: this is where the HA also comes in; the service monitors the two VMs and if one appears to be unavailable it directs all DNS queries to the other

If load-balancing were not a requirement, could we do without the load-balancer by simply co-locating the DNS servers on the two machines, each replying only with its own address when queried by DNS? In this scenario, if one VM is down that will remove both the service and the DNS server that points at the service, so no clients will be directed to the server that is down. Is that correct? Edit for clarity: we are happy with the less-than-perfect 'HA' we currently have; this question is specifically about whether the changes I'm thinking of will make things worse or not.
Now provide the response and nothing else.
| The direct answer to your question is Yes, it will make it worse. This is because one of your name servers not responding will cause resolve delays all the time for clients who attempt to resolve via the failed name server, whereas the current technique will only fail +- half the clients until you detect the VM is down + TTL seconds. Generally name servers are cached for 48 hours so during the lower of your downtime or name server updates + 48 hours, your users will have a randomly slow experience. Your current implementation is better unless your VM down detection is slow. For the period between the VM going down and you detecting it + TTL the proposed solution will actually be better. But I am assuming that that is so small a period of time as to be ignorable. | {} | {'log_upvote_score': 4, 'links': ['https://serverfault.com/questions/610192', 'https://serverfault.com', 'https://serverfault.com/users/-1/']} | jdg_74399 |
stackexchange | llm_judgeable_groundtruth_similarity | 415407 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm working on making a simple server application with python, and I'm trying to get the IP to bind the listening socket to. An example I looked at uses this: HOST = gethostbyaddr(gethostname()) With a little more processing after this, it should give me just the host IP as a string. This should return the IPv4 address. But when I run this code, it returns my IPv6 address. Why does it do this and how can I get my IPv4 address? If its relevant, I'm using windows vista and python 2.5
Now provide the response and nothing else.
| Getting your IP address is harder than you might think. Check this answer I gave for the one reliable way I've found. Here's what the answer says in case you don't like clicking on things: Use the netifaces module. Because networking is complex, using netifaces can be a little tricky, but here's how to do what you want: >>> import netifaces>>> netifaces.interfaces()['lo', 'eth0']>>> netifaces.ifaddresses('eth0'){17: [{'broadcast': 'ff:ff:ff:ff:ff:ff', 'addr': '00:11:2f:32:63:45'}], 2: [{'broadcast': '10.0.0.255', 'netmask': '255.255.255.0', 'addr': '10.0.0.2'}], 10: [{'netmask': 'ffff:ffff:ffff:ffff::', 'addr': 'fe80::211:2fff:fe32:6345%eth0'}]}>>> for interface in netifaces.interfaces():... print netifaces.ifaddresses(interface)[netifaces.AF_INET]...[{'peer': '127.0.0.1', 'netmask': '255.0.0.0', 'addr': '127.0.0.1'}][{'broadcast': '10.0.0.255', 'netmask': '255.255.255.0', 'addr': '10.0.0.2'}]>>> for interface in netifaces.interfaces():... for link in netifaces.ifaddresses(interface)[netifaces.AF_INET]:... print link['addr']...127.0.0.110.0.0.2 This can be made a little more readable like this: from netifaces import interfaces, ifaddresses, AF_INETdef ip4_addresses(): ip_list = [] for interface in interfaces(): for link in ifaddresses(interface)[AF_INET]: ip_list.append(link['addr']) return ip_list If you want IPv6 addresses, use AF_INET6 instead of AF_INET . If you're wondering why netifaces uses lists and dictionaries all over the place, it's because a single computer can have multiple NICs, and each NIC can have multiple addresses, and each address has its own set of options. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/415407', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2128/']} | jdg_74400 |
stackexchange | llm_judgeable_groundtruth_similarity | 2767247 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Solve $$u_t=ku_{xx}\\u(x,0)=g(x)$$ for $t\ge 0, -\infty<x<\infty$, where $g(x) = e^{-2x}$. We have that the solution is given by $u(x,t) = \frac{1}{\sqrt{4\pi kt}}\int_{-\infty}^\infty e^{-(x-y)^2/4kt} \cdot e^{-2y} dy$ . I'd like to apply the change of variable $-p^2=-2y-\frac{(x-y)^2}{4kt}$ to arrive to something like $\int e^{-p^2}$ and then to write the answer in terms of Erf(z) function, but I'm having difficulties trying to find the limits of the integral after the change of variable. The problem is the infinity symbol, because I get $p=-2(\infty+\frac{(x-y)^2}{8ky})$ which I don't know how to interpret. What to do in this case?
Now provide the response and nothing else.
| Your change of variables won't work because you will still have $y$ in the integral. To show this, take the infinitesimal change of $p$$$dp=\left(2y+\frac{(x-y)^2}{4kt}\right)^{-1/2}\left(1+\frac{y-x}{4kt}\right)dy$$Instead, you should expand $(x-y)^2$ and attempt to reduce the problem to a version of the previous question you asked here .$$\begin{align}2y+\frac{(x-y)^2}{4kt} &= 2y + \frac{1}{4kt}\left(x^2-2xy+y^2\right) = \frac{x^2}{4kt} + y\left(2-\frac{x}{2kt}\right)+\frac{y^2}{4kt}\\&= \frac{x^2}{4kt}+y\left(\frac{8kt}{4kt}-\frac{2x}{4kt}\right)+\frac{y^2}{4kt}\\&= \frac{1}{4kt}\left[x^2-(x-4kt)^2+(x-4kt)^2-2y(x-4kt)+y^2\right]\\&= \frac{1}{4kt}\left[x^2-(x-4kt)^2+(x-4kt-y)^2\right]\\&= 2x-4kt+\frac{(x-4kt-y)^2}{4kt}\end{align}$$Let's plug this in and take a look at what the integral looks like now$$u(x,t)=\frac{1}{\sqrt{4\pi kt}}\int_{-\infty}^\infty e^{-\frac{(x-y)^2}{4kt}}e^{-2y}dy=\frac{e^{-2x+4kt}}{\sqrt{4\pi kt}}\int_{-\infty}^\infty e^{-\frac{(x-4kt-y)^2}{4kt}}dy$$Now this looks like the problem that was linked above and we can use the same procedure. Take the variable change to be $p=\frac{x-4kt-y}{\sqrt{4kt}}$. Choosing the new limits of integration is as simple as plugging in the old limits of integration into $p$ and we find that $y\to\infty\implies p\to-\infty$ and $y\to-\infty\implies p\to\infty$. Applying the change of variables yields$$u(x,t)=-\frac{e^{-2x+4kt}}{\sqrt{\pi}}\int_\infty^{-\infty}e^{-p^2}dp=\frac{e^{-2x+4kt}}{\sqrt{\pi}}\int_{-\infty}^\infty e^{-p^2}dp$$Luckily for us, we do not need to write this in terms of the error function as this integral is known to be equal to $\sqrt{\pi}$. We now find a simple representation of the solution$$\boxed{u(x,t)=e^{-2x+4kt}}$$Plugging this into the original PDE confirms the solution. May the Fourth be with you. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2767247', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/-1/']} | jdg_74401
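The boxed solution can also be sanity-checked numerically with central finite differences; the test point and the value of k below are arbitrary choices:

```python
import math

def u(x, t, k):
    """Candidate solution u(x, t) = exp(-2x + 4kt)."""
    return math.exp(-2 * x + 4 * k * t)

def heat_residual(x, t, k, h=1e-4):
    """Approximate u_t - k*u_xx at (x, t) by central differences."""
    u_t = (u(x, t + h, k) - u(x, t - h, k)) / (2 * h)
    u_xx = (u(x + h, t, k) - 2 * u(x, t, k) + u(x - h, t, k)) / (h * h)
    return u_t - k * u_xx

residual = heat_residual(x=0.3, t=0.7, k=0.5)
print(abs(residual) < 1e-3)  # True
```

The residual is zero up to discretization error, consistent with $u_t = 4k\,u$ and $u_{xx} = 4u$, so that $u_t - ku_{xx} = 0$ exactly.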
stackexchange | llm_judgeable_groundtruth_similarity | 6978242 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am using Google Custom Search Engine with their new auto-completion feature. I want this whole javascript to be loaded AFTER the page itself is loaded. The original Google code is this: <script type="text/javascript" src="http://www.google.com/jsapi"></script><script type="text/javascript"> google.load('search', '1'); google.setOnLoadCallback(function() { google.search.CustomSearchControl.attachAutoCompletion( 'some-long-unique-id', document.getElementById('q'), 'cse-search-box'); });</script><script type="text/javascript" src="http://www.google.com/cse/brand?form=cse-search-box&lang=cs"></script> I have transformed this code using tutorial about JS dynamic loading to this code: (function() { var goog = document.createElement('script'); goog.type = 'text/javascript'; goog.src = 'http://www.google.com/jsapi'; var cse = document.createElement('script'); cse.type = 'text/javascript'; cse.src = 'http://www.google.com/cse/brand?form=cse-search-box&lang=cs'; goog.onload = function() { google.load('search', '1'); google.setOnLoadCallback(function() { google.search.CustomSearchControl.attachAutoCompletion( 'some-long-unique-id', document.getElementById('q'), 'cse-search-box'); }); }; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(cse, s); s.parentNode.insertBefore(goog, s);})(); Well, even though I think my solution should work(the same way has Google changed their Analytics on-demand asynchronous code), it doesn't. The page loads fine and as soon as CSE loads, the page goes blank. Something clears the DOM, I suppose its some kind of "Google thing" ? Can someone bring some light on this problem and possibly a working solution ? Thanks
Now provide the response and nothing else.
| OK, so by checking Google Loader Developer's Guide and by lots of trying-and-testing I've figured how to change my code so it works as I expected in my question: (function() { var goog = document.createElement('script'); goog.type = 'text/javascript'; goog.src = 'http://www.google.com/jsapi'; goog.onload = function() { google.load('search', '1', {"callback": function() {}}); google.setOnLoadCallback(function() { google.search.CustomSearchControl.attachAutoCompletion( 'some-long-unique-id', document.getElementById('q'), 'cse-search-box'); }); }; var cse = document.createElement('script'); cse.type = 'text/javascript'; cse.src = 'http://www.google.com/cse/brand?form=cse-search-box&lang=cs'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(cse, s); s.parentNode.insertBefore(goog, s); })() The main thing is this line: google.load('search', '1', {"callback": function() {}}); If you don't specify callback (at least empty function as I do), then the whole page goes blank, when Google's CSE loads. I have no idea why, but it works fine now with this dummy callback function. Hope it helps someone with the same problem. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6978242', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/385264/']} | jdg_74402 |
stackexchange | llm_judgeable_groundtruth_similarity | 6740100 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm writing some library code distributed as a jar file that developers will need to initialize with an application id before using. Initialization is just a function call, like MyLibrary.initialize("16ea53b"); The tricky thing is that I am not sure how to instruct developers to make this initialization call. At first I thought a single static initializer block in the main activity would be the easiest way to do it. The problem is a user could enter the application through some other activity or intent, and the main activity would not be loaded. Is there a general way to ensure that a line of code is run at the application's startup regardless of how the application was started? The initialize call is idempotent so I could just tell people to make this initialization call in every place it could be used, but that would be bothersome.
Now provide the response and nothing else.
| The problem is that you are making a copy of the hash to work with in this line: my %h2 = %{$hRef}; And that is understandable, since many posts here on SO use that idiom to make a local name for a hash, without explaining that it is actually making a copy. In Perl, a hash is a plural value, just like an array. This means that in list context (such as you get when assigning to a hash) the aggregate is taken apart into a list of its contents. This list of pairs is then assembled into a new hash as shown. What you want to do is work with the reference directly:

for (keys %$href)   {...}
for (values %$href) {...}

my $x = $href->{some_key};
# or
my $x = $$href{some_key};

$$href{new_key} = 'new_value';

When working with a normal hash, you have the sigil, which is either a % when talking about the entire hash, a $ when talking about a single element, and @ when talking about a slice. Each of these sigils is then followed by an identifier:

%hash          # whole hash
$hash{key}     # element
@hash{qw(a b)} # slice

To work with a reference named $href, simply replace the string hash in the above code with $href. In other words, $href is the complete name of the identifier:

%$href          # whole hash
$$href{key}     # element
@$href{qw(a b)} # slice

Each of these could be written in a more verbose form as:

%{$href}
${$href}{key}
@{$href}{qw(a b)}

which is again a substitution of the string '$href' for 'hash' as the name of the identifier:

%{hash}
${hash}{key}
@{hash}{qw(a b)}

You can also use a dereferencing arrow when working with an element:

$hash->{key} # exactly the same as $$hash{key}

But I prefer the doubled-sigil syntax since it is similar to the whole-aggregate and slice syntax, as well as the normal non-reference syntax. So to sum up, any time you write something like this:

my @array = @$array_ref;
my %hash  = %$hash_ref;

you will be making a copy of the first level of each aggregate. When using the dereferencing syntax directly, you will be working on the actual values, and not a copy.
If you want a REAL local name for a hash, but want to work on the same hash, you can use the local keyword to create an alias.

sub some_sub {
    my $hash_ref = shift;

    our %hash;  # declare a lexical name for the global %{__PACKAGE__::hash}

    local *hash = \%$hash_ref;  # install the hash ref into the glob
                                # the `\%` bit ensures we have a hash ref

    # use %hash here, all changes will be made to $hash_ref

}  # local unwinds here, restoring the global to its previous value if any

That is the pure Perl way of aliasing. If you want to use a my variable to hold the alias, you can use the module Data::Alias | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/6740100', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2652/']} | jdg_74403
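The copy-versus-alias trap at the heart of this answer is not Perl-specific. For comparison, here is the same distinction sketched in Python, where dict(d) plays the role of my %h2 = %{$hRef} (a first-level copy) and plain assignment plays the role of an alias:

```python
original = {"a": 1, "b": 2}

copy = dict(original)  # like  my %h2 = %{$hRef};  -- a shallow, first-level copy
alias = original       # just another name for the very same dict

copy["a"] = 100        # modifies only the copy
alias["b"] = 200       # modifies the original through the alias

print(original)  # {'a': 1, 'b': 200}
print(copy)      # {'a': 100, 'b': 2}
```

As in Perl, mutations through the copy never reach the original, while mutations through the alias do.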
stackexchange | llm_judgeable_groundtruth_similarity | 6442123 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
This has got to be such a simple question but I just can't get the answer. I have an XmlNode and all I want to do is output this node, as a string, with indentations (tabs or spaces) intact to provide better readability. So far I tried XmlWriter, XmlTextWriter, XmlDocument, XmlReader. I tried the PreserveWhitespace in XmlDocument but I couldn't get the XmlDocument to output my node. I tried the Formatting = Formatting.Indented property in XmlTextWriter but I couldn't figure out how to output the contents to string. To output the XmlNode as string WITHOUT indentation is easy. I just do XmlNode.OuterXml. How do I get the indentations in there? I want to do this without looping through the XmlNode and using brute force to add whitespace, because I think there should be a simpler way. Thanks. Edit: For future readers, here is the answer:

var xmlNode = ...; // some object of type XmlNode
using (var sw = new StringWriter())
{
    using (var xw = new XmlTextWriter(sw))
    {
        xw.Formatting = Formatting.Indented;
        xw.Indentation = 2; // default is 1. I used 2 to make the indents larger.
        xmlNode.WriteTo(xw);
    }
    return sw.ToString(); // The node, as a string, with indents!
}
Now provide the response and nothing else.
| You were on the right path with the XMLTextWriter , you simply need to use a StringWriter as the base stream. Here are a few good answers on how this is accomplished. Pay particular attention to the second answer, if your encoding needs to be UTF-8. Edit: If you need to do this in multiple places, it is trivial to write an extension method to overload a ToString() on XmlNode : public static class MyExtensions{ public static string ToString(this System.Xml.XmlNode node, int indentation) { using (var sw = new System.IO.StringWriter()) { using (var xw = new System.Xml.XmlTextWriter(sw)) { xw.Formatting = System.Xml.Formatting.Indented; xw.Indentation = indentation; node.WriteContentTo(xw); } return sw.ToString(); } }} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6442123', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/776805/']} | jdg_74404 |
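For comparison with other ecosystems, the same "serialize a node with indentation" task is a near one-liner in Python with the stdlib xml.dom.minidom module (this is just an aside, not part of the C# answer above):

```python
from xml.dom import minidom

# Parse a compact document and re-serialize one node with indentation.
doc = minidom.parseString("<root><child><leaf/></child></root>")
pretty = doc.documentElement.toprettyxml(indent="  ")

print(pretty)
```

toprettyxml accepts the indentation string directly, much like the Indentation property on XmlTextWriter.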
stackexchange | llm_judgeable_groundtruth_similarity | 37456494 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
While reading Keywords That Aren't (or, Comments by Another Name) by Herb Sutter I came across these lines: That's right, some keywords are semantically equivalent to whitespace, a glorified comment. And We've seen why the C++ language treats keywords as reserved words, and we've seen two keywords —auto and register — that make no semantic difference whatsoever to a C++ program. Don't use them; they're just whitespace anyway, and there are faster ways to type whitespace. If the keywords like auto (maybe not in C++11) and register are of no value, then why were they created and used? If it doesn't make any difference to include the register before a variable #include<stdio.h>int main(){ register int a = 15; printf("%d\n%d\n",&a,a); return 0;} Why does the above program give an error? test_register.c: In function ‘main’: test_register.c:4:2: error: address of register variable ‘a’ requested printf("%d\n%d\n",&a,a); The following program works in C++. #include<iostream>int main(){ register int a = 15; std::cout<<&a<<'\n'<<a; return 0;}
Now provide the response and nothing else.
| register In C, the register storage class was used as a hint to the compiler, to express that a variable should be preferentially stored in a register . Note that the hint to store a register variable in an actual register may or may not be honored, but in either case the relevant restrictions still apply. See C11, 6.7.1p6 (emphasis mine): A declaration of an identifier for an object with storage-class specifier register suggests that access to the object be as fast as possible. The extent to which such suggestions are effective is implementation-defined. [footnote 121] [footnote 121] The implementation may treat any register declaration simply as an auto declaration. However, whether or not addressable storage is actually used, the address of any part of an object declared with storage-class specifier register cannot be computed , either explicitly (by use of the unary & operator as discussed in 6.5.3.2) or implicitly (by converting an array name to a pointer as discussed in 6.3.2.1). Thus, the only operators that can be applied to an array declared with storage-class specifier register are sizeof and _Alignof . In C++ it is simply an unused reserved keyword, but it's reasonable to assume that it was kept for syntactical compatibility with C code. auto In C, the auto storage class defines a variable of automatic storage, but it's not usually used since function-local variables are auto by default . Similarly, it's reasonable to assume that it was initially carried over to C++ for syntactical compatibility only, although later it got its own meaning ( type inference ). | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/37456494', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3719089/']} | jdg_74405 |
stackexchange | llm_judgeable_groundtruth_similarity | 56153817 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have following dataframe: import pandas as pddf = pd.DataFrame(data=[[1,2,3,4,5,6,1,2,3], [7,8,9,10,11,12,7,8,9], [13,14,15,16,17,18,4,5,6]], index=pd.date_range('2004-01-01', '2004-01-03'))df.columns = pd.MultiIndex.from_product([['x', 'y', 'z'], list('abc')])df x y z a b c a b c a b c2004-01-01 1 2 3 4 5 6 1 2 32004-01-02 7 8 9 10 11 12 7 8 92004-01-03 13 14 15 16 17 18 4 5 6 I want to sum groups of columns at the first level (level=0) and get something like this: (x+z) y a b c a b c2004-01-01 2 4 6 4 5 62004-01-02 14 16 18 10 11 122004-01-03 17 19 21 16 17 18 What I have tried: mapping = {'x': 'r1', 'y': 'r2', 'z': 'r1'}df.groupby(mapping, axis=1, level=0).apply(lambda g: g.sum(axis=1, level=1)) But I got an error. Thanks for any advise.
Now provide the response and nothing else.
| Okay, so the issue seems to be that you only refer to the function event.preventDefault rather than invoking it. Adding some brackets at the end to invoke it should do the trick: event.preventDefault() . I however found this issue while looking for some simple code to use, so I will also leave the hook I made for this in case others are in the same situation: import { useRef, useEffect } from "react";export function useHorizontalScroll() { const elRef = useRef(); useEffect(() => { const el = elRef.current; if (el) { const onWheel = e => { if (e.deltaY == 0) return; e.preventDefault(); el.scrollTo({ left: el.scrollLeft + e.deltaY, behavior: "smooth" }); }; el.addEventListener("wheel", onWheel); return () => el.removeEventListener("wheel", onWheel); } }, []); return elRef;} Usage: import React from "react";import { useHorizontalScroll } from "./useHorizontalScroll";export const SideScrollTest = () => { const scrollRef = useHorizontalScroll(); return ( <div ref={scrollRef} style={{ width: 300, overflow: "auto" }}> <div style={{ whiteSpace: "nowrap" }}> I will definitely overflow due to the small width of my parent container </div> </div> );}; Note: The scroll behavior "smooth" seems to be giving some trouble when trying to do continuous scrolling. This behavior can be omitted to have proper continuous scrolling, but it will look jerky. As far as I know, there is no easy solution for this. I have however created a rather involved solution in my own project, so I thought some people may appreciate that also: https://gist.github.com/TarVK/4cc89772e606e57f268d479605d7aded | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/56153817', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9446730/']} | jdg_74406 |
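The pandas question in this entry asked about summing MultiIndex column groups. A minimal, hedged sketch of one way to produce the desired output, reusing the question's data and mapping (the transpose-and-group approach shown here is just one of several options):

```python
import pandas as pd

df = pd.DataFrame(
    data=[[1, 2, 3, 4, 5, 6, 1, 2, 3],
          [7, 8, 9, 10, 11, 12, 7, 8, 9],
          [13, 14, 15, 16, 17, 18, 4, 5, 6]],
    index=pd.date_range('2004-01-01', '2004-01-03'),
)
df.columns = pd.MultiIndex.from_product([['x', 'y', 'z'], list('abc')])

# x and z should be summed together; y stays on its own
mapping = {'x': 'r1', 'y': 'r2', 'z': 'r1'}

# Group the columns by (mapped level-0 name, level-1 name): transpose,
# group the rows by the two key arrays, sum, and transpose back.
keys = [df.columns.get_level_values(0).map(mapping).to_list(),
        df.columns.get_level_values(1).to_list()]
result = df.T.groupby(keys).sum().T
```

The result has `('r1', 'a')` columns holding `x.a + z.a`, and so on, while the `r2` (i.e. `y`) columns are passed through unchanged.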
stackexchange | llm_judgeable_groundtruth_similarity | 36987162 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
public class A{ int aa; aa = 10; //compile error } Is the compile error because class fields need to be compile-time constants? Is this design to improve performance?
Now provide the response and nothing else.
| There are 3 points that must be done. This is the full solution that worked for me. AndroidManifest.xml needs to configure both orientation and screenSize in android:configChanges for the target activity <?xml version="1.0" encoding="utf-8"?><manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.skatetube.skatetube"> <application> <activity android:name=".WebViewActivity" android:configChanges="orientation|screenSize"> </activity> </application></manifest> Override onSaveInstanceState and onRestoreInstanceState of the Activity so that the state can be restored when rotated @Overrideprotected void onSaveInstanceState(Bundle outState){ super.onSaveInstanceState(outState); mWebView.saveState(outState);}@Overrideprotected void onRestoreInstanceState(Bundle savedInstanceState){ super.onRestoreInstanceState(savedInstanceState); mWebView.restoreState(savedInstanceState);} Do not load the URL again when the screen rotates; use savedInstanceState to know the current status in onCreate() of the target activity @Overrideprotected void onCreate(@Nullable Bundle savedInstanceState) { mWebView = (WebView) findViewById(R.id.webView_h5); if (savedInstanceState == null) { mWebView.loadUrl(url); }} Miss any one of these three and the activity still reloads. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/36987162', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4001653/']} | jdg_74407 |
stackexchange | llm_judgeable_groundtruth_similarity | 99445 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently asked a question on Stack Overflow to find out why isset() was faster than strlen() in PHP . This raised questions around the importance of readable code and whether performance improvements of micro-seconds in code were worth even considering. My father is a retired programmer, and I showed him the responses. He was absolutely certain that if a coder does not consider performance in their code even at the micro level, they are not good programmers. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kind of micro-performance improvements? Perhaps this kind of considering is up to the people who write the actual language code? (of PHP in the above case). The environmental factors could be important - the Internet consumes 10% of the world's energy. I wonder how wasteful a few micro-seconds of code is when replicated trillions of times on millions of websites? I'd like to know answers preferably based on facts about programming. Is micro-optimisation important when coding? My personal summary of 25 answers, thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us not to make obvious bad choices when coding such as if (expensiveFunction() || counter < X) Should be if (counter < X || expensiveFunction()) ( Example from @zidarsk8 ) This could be an inexpensive function and therefore changing the code would be micro-optimisation. But, with a basic understanding, you would not have to, because you would write it correctly in the first place.
Now provide the response and nothing else.
| I both agree and disagree with your father. Performance should be thought about early, but micro-optimization should only be thought about early if you actually know that a high percent of time will be spent in small CPU-bound sections of code. The problem with micro-optimization is that it is usually done without having any concept of how programs actually spend more time than necessary. This knowledge comes from experience doing performance tuning, as in this example , in which a seemingly straightforward program, with no obvious inefficiencies, is taken through a series of diagnosis and speedup steps, until it is 43 times faster than at the beginning. What it shows is that you cannot really guess or intuit where the problems will be. If you perform diagnosis, which in my case is random-pausing , lines of code responsible for a significant fraction of time are preferentially exposed. If you look at those, you may find substitute code, and thereby reduce overall time by roughly that fraction. Other things you didn't fix still take as much time as they did before, but since the overall time has been reduced, those things now take a larger fraction, so if you do it all again, that fraction can also be eliminated. If you keep doing this over multiple iterations, that's how you can get massive speedups, without ever necessarily having done any micro-optimization . After that kind of experience, when you approach new programming problems, you come to recognize the design approaches that initially lead to such inefficiencies. In my experience, it comes from over-design of data structure, non-normalized data structure, massive reliance on notifications, that sort of thing. | {} | {'log_upvote_score': 8, 'links': ['https://softwareengineering.stackexchange.com/questions/99445', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/33747/']} | jdg_74408 |
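The condition-ordering example in the question above can be made concrete with a small, deterministic sketch (the function, counts, and thresholds are illustrative) that counts how often the expensive side of a short-circuiting condition actually runs under each ordering:

```python
# Count how often an "expensive" check runs depending on where it sits
# in a short-circuiting condition. Names and numbers are illustrative.
calls = 0

def expensive_function() -> bool:
    global calls
    calls += 1
    return True  # pretend this is a costly check that happens to pass

X = 10
N = 100

calls = 0
for counter in range(N):
    if expensive_function() or counter < X:  # expensive side evaluated first
        pass
expensive_first = calls

calls = 0
for counter in range(N):
    if counter < X or expensive_function():  # cheap check short-circuits
        pass
cheap_first = calls

# expensive_first == 100, cheap_first == 90: same behavior, fewer costly calls
```

Writing the cheap test first costs nothing in readability, which is why it falls under "write it correctly in the first place" rather than micro-optimization.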
stackexchange | llm_judgeable_groundtruth_similarity | 1697484 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to make a button that starts my php script after I click it. So do I just make 2 separate files and have action post to the php file and then let it start? or is there a better way to do this? Possibly in one document? Update: Well, I basically made a script that would do a series of a loops until it's finished. So usually when I visit the page it automatically starts, so I'm making a button to start it only when I need it. More info: Answer to one of the questions, "starting the script" as in it would only execute the script. More info: I really don't need to pass any data from the submit form to the php script. I just want my script to run when I hit a button. I just want to know what's the best way to do this.
Now provide the response and nothing else.
| Having 2 files like you suggested would be the easiest solution. For instance: 2 files solution: index.html (.. your html ..)<form action="script.php" method="get"> <input type="submit" value="Run me now!"></form>(...) script.php <?php echo "Hello world!"; // Your code here?> Single file solution: index.php <?php if (!empty($_GET['act'])) { echo "Hello world!"; //Your code here } else {?>(.. your html ..)<form action="index.php" method="get"> <input type="hidden" name="act" value="run"> <input type="submit" value="Run me now!"></form><?php }?> | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1697484', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/170365/']} | jdg_74409 |
stackexchange | llm_judgeable_groundtruth_similarity | 27932983 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
In my script.js: pic.src = "/static/photos/1.jpg"; // This workspic2.src = "{% static 'photos/1.jpg' %}" // Does not work Why in the world does this happen? Since in my home.html, the {% static 'path' %} works: {% load staticfiles %}<script src="{% static 'script.js' %}"></script> // This works And is it {% load staticfiles %} or {% load static %} ? Both work for me, script.js is loaded.
Now provide the response and nothing else.
| Since you are using django's template language you can ONLY do this within your template between <script> tags. In other words if you wished to use your pic2.src javascript variable in an external script then you would need to declare it between <script> tags like so <script> var pic2.src = "{% static "photos/1.jpg" %}"</script> And then you could access it in your external scripts that you might load like this: <script type="text/javascript" src="{% static "js/my_external_script.js" %}"></script> Regarding your question concerning load static and load staticfiles there is little distinction. Both act as a joiner for the STATIC_URL in your settings.py and the actual path to the file itself so both should work for your case. See here and here for more info. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/27932983', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4451066/']} | jdg_74410 |
stackexchange | llm_judgeable_groundtruth_similarity | 3278714 |
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
Let $(G,*)$ be a group, and let $x$ be an element of odd order of $G$. Prove or disprove that there is an element $y$ in $G$ such that $y^2 = x$. Please provide some hint; I am not able to find any counterexamples nor able to prove it.
Now provide the response and nothing else.
| Suppose $x$ has order $2n+1$ . Then $x^{n+1}$ can serve as $y$ , since $y^2=x^{2n+2}=x^{2n+1}x=x$ . Therefore the statement is true. | {} | {'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3278714', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/597464/']} | jdg_74411 |
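The construction in the proof above — $y = x^{n+1}$ when $x$ has order $2n+1$ — can be checked mechanically. A small Python sketch in the additive group $\mathbb{Z}_m$ (written additively, so the power becomes a multiple; the choice of group is purely illustrative):

```python
from math import gcd

def square_root_of_odd_order(x: int, m: int) -> int:
    """In (Z_m, +), given x of odd order k = 2n + 1, return y with y + y = x.

    Additively, the proof's y = x^(n+1) becomes y = ((k + 1) // 2) * x mod m.
    """
    k = m // gcd(x, m)  # order of x in the additive group Z_m
    assert k % 2 == 1, "x must have odd order"
    return (k + 1) // 2 * x % m

y = square_root_of_odd_order(4, 9)  # x = 4 has order 9 in Z_9, so y + y == 4
```

Since every element of $\mathbb{Z}_{15}$ has odd order (its order divides 15), the construction can be verified exhaustively there.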
stackexchange | llm_judgeable_groundtruth_similarity | 311365 |
Below is a question asked on the forum softwareengineering.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am writing a physics simulation program in C++. I am a novice in OOP and C++. In my program, several objects need to be initialized based on data from an input file. For example, an imaginary input file: # Wind turbine input file:number_of_blades = 2hub_height = 120# Airfoil data:airfoil1 = { chord = 2, shape = naca0012}airfoil2 = { chord = 3, shape = naca0016} For this example, let's say I have a Turbine class and an Airfoil class. Airfoil objects need to know their chord and their shape, and the turbine object needs to know the height and number of blades. Should I do this so that each object can build itself from an input file? e.g.: class Turbine { public: Turbine(File input_file); // reads input file to get the number of blades private: int num_blades_; double height_;}; or should it be done with a free function: Turbine create_turbine_from_file(File input_file){ Turbine t; t.set_num_blades(input_file.parse_num_blades()); t.set_height(input_file.parse_height()); return t;};class Turbine { public: Turbine(); void set_height(); void set_num_blades(); private: int num_blades_; double height_;}; What are the advantages and disadvantages of each method? Is there a better way?
Now provide the response and nothing else.
| First of all, congratulations for taking programming a step further and wondering about how to do it better (and for asking a good question). It's a great attitude and absolutely necessary to take your programs a step further. Kudos! What you're dealing with here is a problem related to your program's architecture (or design, depending on who you ask). It's not so much about what it does, but how it does it (i.e. the structure of your program instead of its functionality). It's very important to be clear about this: you could totally make those classes take File objects as input, and your program could still work. If you went a step further and added all the exception handling code and took care of edge cases related to files and I/O (which should be done somewhere) in those classes (...but not there), and they became a hodgepodge of I/O and domain logic (domain logic means logic related to the actual problem you're trying to solve), your program could "work". The goal, if you plan on making this more than a simple, one-off thing, should be that it work properly , meaning that you can change parts of it without affecting others, fix bugs as they surface and hopefully extend it without too much difficulty when and if you find new features and use cases you want to add. OK, now, the answer. First: yes, using Files as method parameters in the Turbine class violates the SRP. Your Turbine and Airfoil classes should not know anything about files. And, yes, there are better ways to do it. I'll talk you through one way I would do it first and then go into more detail about why it's better later. Remember, this is only an example (not really compilable code, but a sort of pseudocode) and one possible way to do it. 
// TurbineData struct (to hold the data for turbines)struct TurbineData{ int number_of_blades; double hub_height;};// TurbineRepository (abstract) classclass TurbineRepository{ // Defines an interface for Turbine repositories, which return vectors of TurbineData structures. public: virtual std::vector<TurbineData> getAll() = 0;};// TurbineFileRepository classclass TurbineFileRepository: public TurbineRepository{ // Implements the TurbineRepository "interface". public: TurbineFileRepository(File inFile); std::vector<TurbineData> getAll(); private: File file;};TurbineFileRepository::TurbineFileRepository(File inFile){ // Process the File and handle everything you need to read from it // At some point, do something like: // file = inFile}std::vector<TurbineData> TurbineFileRepository::getAll(){ // Get the data from the file here and return it as a vector}// TurbineFactory classclass TurbineFactory{ public: TurbineFactory(TurbineRepository *repo); std::vector<Turbine> createTurbines(); private: TurbineRepository *repository;};TurbineFactory::TurbineFactory(TurbineRepository *repo){ // Create the factory here and eventually do something like: // repository = repo;}std::vector<Turbine> TurbineFactory::createTurbines(){ // Create a new Turbine for each of the structs yielded by the repository // Do something like... std::vector<Turbine> results; for (auto const &data : repository->getAll()) { results.push_back(Turbine(data.number_of_blades, data.hub_height)); } return results;}// And finally, you would use it like:int main(){ TurbineFileRepository repo = TurbineFileRepository(/* your file here */); TurbineFactory factory = TurbineFactory(&repo); std::vector<Turbine> my_turbines = factory.createTurbines(); // Do stuff with your newly created Turbines} OK, so, the main idea here is to isolate, or hide, the different parts of the program from each other.
I especially want to isolate the core part of the program, where the domain logic is (the Turbine class, which actually models and solves the problem), from other details, such as storage. First, I define a TurbineData structure to hold the data for Turbine s that comes from the outside world. Then, I declare a TurbineRepository abstract class (meaning a class that cannot be instantiated, only used as parent for inheritance) with a virtual method, that basically describes the behavior of "providing TurbineData structures from the outside world". This abstract class can also be called an interface (a description of behavior). The TurbineFileRepository class implements that method (and thus provides that behavior) for File s. Lastly, the TurbineFactory uses a TurbineRepository to get those TurbineData structures and create Turbine s: TurbineFactory -> TurbineRepo -> Turbine // with TurbineData as a means of passing data. Why am I doing it this way? Why should you separate file I/O from the inner workings of your program? Because the two main goals of the design or architecture of your programs are to reduce complexity and to isolate change. Reducing complexity means making things as simple as possible (but not simpler) so that you can reason about the individual parts properly and separately: when you're thinking about Turbine s, you shouldn't have think about the format in which the files that contain the turbine data are written, or whether the File you're reading is there or not. You should be thinking about Turbine s, period. Isolating change means that changes should affect the least possible amount of places in the code, so that the chances that bugs happen (and the possible areas where they can happen after you change the code) are reduced to the absolute minimum. Also, things that change often, or are likely to change in the future, should be separate from the things that aren't. 
In your case, for example, if the format in which Turbine data is stored in the files changes, there should be no reason for the Turbine class to change, only classes like TurbineFileRepository . The only reason Turbine should change is if you added more sophisticated modeling to it, or the underlying physics changed (which is considerably less likely than the file format changing), or something similar. The detail of where and how the data is stored should be handled separately by classes, such as TurbineFileRepository , that will, consequently, have no idea about how Turbine s work, or even why the data they provide is needed. These classes totally should implement I/O exception handling, and all the kind of boring and incredibly important stuff that happens when your program talks to the outside world, but they should not go beyond that. The function of TurbineRepository is to hide from TurbineFactory all those details and only provide it with a vector of data. It's also what TurbineFileRepository implements so that no details about it need be known to whoever wants to use TurbineData structures. As a nice possible feature change, imagine you wanted to store turbine and airfoil data in a MySQL database. For that to work, all you'd need to do is implement a TurbineDatabaseRepository and plug it in. Nothing more. Cool, huh? Best of luck with your programming! | {} | {'log_upvote_score': 4, 'links': ['https://softwareengineering.stackexchange.com/questions/311365', 'https://softwareengineering.stackexchange.com', 'https://softwareengineering.stackexchange.com/users/218441/']} | jdg_74412 |
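To see the layered design above working end to end without the C++ boilerplate, here is a compact runnable analogue in Python — the class names mirror the sketch, while the in-memory repository and the sample data are purely illustrative stand-ins for a file-backed version:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List


@dataclass
class TurbineData:
    number_of_blades: int
    hub_height: float


class Turbine:
    """Domain object: knows nothing about files or storage."""
    def __init__(self, number_of_blades: int, hub_height: float):
        self.number_of_blades = number_of_blades
        self.hub_height = hub_height


class TurbineRepository(ABC):
    """Interface: 'provides TurbineData from the outside world'."""
    @abstractmethod
    def get_all(self) -> List[TurbineData]: ...


class InMemoryTurbineRepository(TurbineRepository):
    """Stands in for the file-backed repository; a file or DB version
    could be swapped in without touching Turbine or TurbineFactory."""
    def __init__(self, rows: List[TurbineData]):
        self._rows = rows

    def get_all(self) -> List[TurbineData]:
        return list(self._rows)


class TurbineFactory:
    def __init__(self, repository: TurbineRepository):
        self._repository = repository

    def create_turbines(self) -> List[Turbine]:
        return [Turbine(d.number_of_blades, d.hub_height)
                for d in self._repository.get_all()]


repo = InMemoryTurbineRepository([TurbineData(2, 120.0), TurbineData(3, 95.5)])
turbines = TurbineFactory(repo).create_turbines()
```

The factory depends only on the abstract repository, so the storage detail is the one line that changes when the data source does.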
stackexchange | llm_judgeable_groundtruth_similarity | 51263768 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
gem install is failing in MacOs Mojave. Anything that can help me solve this? My ruby version is ruby 2.3.7p456 . ➜ sudo gem install json -v '1.8.3'current directory: /Library/Ruby/Gems/2.3.0/gems/json-1.8.3/ext/json/ext/generatormake "DESTDIR="compiling generator.cIn file included from generator.c:1:In file included from ./../fbuffer/fbuffer.h:5:In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/include/ruby-2.3.0/ruby.h:33:/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.13.sdk/System/Library/Frameworks/Ruby.framework/Versions/2.3/usr/include/ruby-2.3.0/ruby/ruby.h:24:10: fatal error: 'ruby/config.h' file not found#include "ruby/config.h" ^~~~~~~~~~~~~~~1 error generated.make: *** [generator.o] Error 1make failed, exit code 2
Now provide the response and nothing else.
| Update 15/03/2021 According to Apple , Ruby won't be included anymore in future versions of the system, so using a Ruby version manager will be the only way to go once this will actually happen. Original answer After hours and hours of trial and errors I wanted to go deeper into the reasons of this error and I finally found this that I think is the clean way to go around this issue: Catalina (NdMe: I think this started earlier, on Mojave) no longer supports including macOS headers in command linetools, they have to be explicitly used through Xcode SDK. You can doso by running gem through the xcrun tool like: xcrun gem install <your_gem> Quoted from a Github issue comment . | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/51263768', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2487058/']} | jdg_74413 |
stackexchange | llm_judgeable_groundtruth_similarity | 36795930 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an Excel spreadsheet with some rows of data on Sheet1 like so:Sheet1: ID Subject Date 1 Tennis 23/04/2016 2 Football 19/05/2016 3 Golf 22/04/2016 Then on Sheet2 I want to create a list of activities happening within 1 week: Sheet2: Activities 1 week awayGolfTennis So because both golf and tennis are happening 22nd and 23rd of April and are both within 1 week away from today, these should be displayed in date descending order. To do this, I know I need to create an index match and set parameters for my dates to see if these are within 7 days. However I cannot get the below to work: =IFERROR(INDEX(Data!$G:$G,MIN(IF(Data!$E$2:$E$5000>=TODAY()-7),ROW($2:$5000)),1,"") Please can someone show me where I am going wrong? Thanks in advance
Now provide the response and nothing else.
| I have an application with the same requirement and to solve it I basically restricted Spring Security to a given ant match patter (using antMatcher ) as follows: http .antMatcher("/api/**") .authorizeRequests() // .anyRequest().authenticated() // .and() .addFilterBefore(authenticationTokenFilterBean(), UsernamePasswordAuthenticationFilter.class); You can read it as follows: for http only invoke these configurations on requests matching the ant pattern /api/** authorizing any request to authenticated users and add filter authenticationTokenFilterBean() before UsernamePasswordAuthenticationFilter . For all others requests this configuration has no effect. | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/36795930', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6128593/']} | jdg_74414 |
stackexchange | llm_judgeable_groundtruth_similarity | 33536 |
Below is a question asked on the forum emacs.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I've used js2-mode but it doesn't indent HTML in JavaScript properly. I've found this repo https://github.com/jsx/jsx-mode.el but it's not for React; it's for some other JSX. What do you use for editing React applications with JSX files?
Now provide the response and nothing else.
| Solution 1: Step 1, Use https://github.com/felipeochoa/rjsx-mode Step 2, Install Emacs 25+, see https://github.com/mooz/js2-mode/issues/291 Step 3, Patch rjsx-mode with the code below (defadvice js-jsx-indent-line (after js-jsx-indent-line-after-hack activate) "Workaround sgml-mode and follow airbnb component style." (save-excursion (beginning-of-line) (if (looking-at-p "^ +\/?> *$") (delete-char sgml-basic-offset)))) Please note there is still an indentation issue if you use an arrow function in a component attribute. But this solution works fine in most cases. The patch is still useful for the current stable version rjsx-mode v0.4.0. The bug was fixed on 2018-8-19 in the unstable version; see https://github.com/felipeochoa/rjsx-mode/pull/75 for details. The reason I stick to rjsx-mode is that it inherits from js2-mode, so I can use imenu functions from js2-mode. It's very useful when writing ES6 JavaScript. Please note that if you use js2-jsx-mode instead of rjsx-mode, you still need my patch. Solution 2: Use web-mode. I don't use web-mode, but its recent release notes claim JSX indentation can be handled properly. If you use web-mode, imenu from js2-mode is no longer available. | {} | {'log_upvote_score': 5, 'links': ['https://emacs.stackexchange.com/questions/33536', 'https://emacs.stackexchange.com', 'https://emacs.stackexchange.com/users/10438/']} | jdg_74415 |
stackexchange | llm_judgeable_groundtruth_similarity | 16786440 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm fairly new to Scala so please be gentle. In the app I'm currently building, I'm using Akka actors and I want to write some unit tests. I came across this official documentation for writing unit tests for Akka actors but I couldn't understand exactly how it should work. In particular, val actorRef = TestActorRef(new MyActor)// hypothetical message stimulating a '42' answerval future = actorRef ? Say42val Success(result: Int) = future.value.getresult must be(42) When I try that, I get not found: value Success , which is not surprising. I then found this example of how to test Scala actors val actorRef = TestActorRef[TickTock]implicit val timeout = Timeout(5 seconds)val future = (actorRef ? new Tick("msg")).mapTo[String]val result = Await.result(future, timeout.duration)Assert.assertEquals("processed the tick message", result) , which admittedly is possibly old, but it is easy to understand and closer to what I normally use when I want to use Futures, and most importantly works. It does require me to declare a few implicits like the ActorSystem, timeout and such, which doesn't seem to be the case with the official way... If possible, I'd like to use the method proposed by the official documentation, so I would appreciate it if someone could help me understand how it works (in particular the Success bit) and how to use it.
Now provide the response and nothing else.
| The answer to your question might be too long, because it is impossible to know how much Scala you actually know. I will try to make my answer as short as possible, but do not hesitate to ask for clarification at any point. I also apologize on behalf of the whole stackoverflow community for making you feel the need to apologize due to an apparent lack of skill before asking a question. In Scala 2.10 a concept of Try was introduced. It is very similar to Option . Option is a concept of handling null s. A value of type Option can take two forms: Some(value) or None . When you have an Option al value you can pattern match on it to see if it is a Some or a None and then act accordingly. Pattern matching occurs in many places in Scala and one of them is during the initialization of val s. Here are few examples: val x = 10 // pattern 'x' on the LHS matches any value on the RHS so 'x' is initialized with 10val Some(x) = Some(10) // pattern 'Some(x)' on the LHS matches any value of type 'Some' and binds it's value to x, so 'x' is yet again initialized with 10 Try is a concept of handling exceptions. A value of type Try can take two forms: Success(result) or Failure(throwable) . When you have a value of type Try you can pattern match on it to see if it is a Success or a Failure . This is what happens in your code (pattern matching on Success ). In contrast to Option the two forms of Try are not in scope by default, which causes the compilation error. This will fix it: import scala.util.{Try, Success, Failure} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16786440', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1814775/']} | jdg_74416 |
stackexchange | llm_judgeable_groundtruth_similarity | 4435906 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a form with some text areas that allow a scroll bar when the text exceeds the text box. The user would like to be able to print the screen, and this text is not visible. How do I make all of the text visible for just printing? Am I better off making a print-to-PDF link or something?
Now provide the response and nothing else.
| You cannot solve this problem with CSS alone. Why Pure-CSS Solutions are Insufficient (with demo) Let me convince you the answers involving print stylesheets and overflow: visible are insufficient. Open this page and look at the source. Just what they suggested, right? Now print preview it (in, say, Chrome 13 on OS X, like me). Note that you can only see a line or two of the note when you attempt to print! Here’s the URL for my test case again: https://alanhogan.github.io/web-experiments/print_textarea.html Solutions: A JavaScript link that opens a new window and writes the contents of the textarea to it for printing. Or: When the textarea is updated, copy its contents to another element that is hidden on screen but displayed when printed. (If your textarea is read-only, then a server-side solution is also workable.) Note that textarea s treat whitespace differently than HTML does by default, so you should consider applying the CSS white-space: pre-wrap; in the new window you open or to your helper div , respectively. IE7 and older do not understand pre-wrap however, so if that is an issue, either accept it or use a workaround for them. Or make the popup window actually plain text, literally served with a media type text/plain (which probably requires a server-side component). The “Print Helper” Solution (with code + demo) I have created a demo of one JavaScript technique . The core concept is copying the textarea contents to another print helper. Code follows. 
HTML: <textarea name="textarea" wrap="wrap" id="the_textarea"></textarea><div id="print_helper"></div> CSS (all / non-print): /* Styles for all media */#print_helper { display: none;} CSS (print): /* Styles for print (include this after the above) */#print_helper { display: block; overflow: visible; font-family: Menlo, "Deja Vu Sans Mono", "Bitstream Vera Sans Mono", Monaco, monospace; white-space: pre; white-space: pre-wrap;}#the_textarea { display: none;} Javascript (with jQuery): <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script><script type="text/javascript">jQuery(function($){ function copy_to_print_helper(){ $('#print_helper').text($('#the_textarea').val()); } $('#the_textarea').bind('keydown keyup keypress cut copy paste blur change', function(){ copy_to_print_helper(); // consider debouncing this to avoid slowdowns! }); copy_to_print_helper(); // on initial page load});</script> Again, the successful JavaScript-based demo is at https://alanhogan.github.io/web-experiments/print_textarea_js.html . | {} | {'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/4435906', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/531746/']} | jdg_74417 |
stackexchange | llm_judgeable_groundtruth_similarity | 16830134 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I am reading Herb Sutter's More Exceptional C++ and item 37 on forward declarations says: Never #include a header when a forward declaration will suffice. Prefer to #include only <iosfwd> when the complete definition of a stream is not needed. Also I heard plenty of advice on including only the headers needed by the compilation unit to reduce dependencies. I understand perfectly well why this should apply to project headers, but I do not quite understand why it is bad to include unnecessary standard headers. For example I do something like this: //standard_library.h#ifndef STANDARD_LIBRARY#define STANDARD_LIBRARY#include <iostream>#include <chrono>#include <thread>...// Everything I need in the project#endif and include this single header everywhere I need something from std The problems that I can imagine are: Pollution of namespace by C library functions that do not need to be in the std namespace. Slower compilation time But I haven't had significant problems with 1. so far. Almost everything is in the std namespace. Also I do not fully understand why 2. is necessarily a significant problem. The standard headers rarely change. Also as far as I know the compiler can precompile them. When it comes to templates, they are instantiated (compiled) only when I need them. There are also benefits: Less typing Less reading Less figuring out which headers I need and in which header a certain function is I am a beginner programmer without experience on big projects and I sincerely want to figure this out so please have mercy upon me.
Now provide the response and nothing else.
| Besides namespace pollution and compilation time (although the latter is reducible by precompiled headers, it will hurt those compiling a large project once because they actually want to use it, and not develop - also you want to think about rebuilds which are necessary once in a while) you named "Less figuring out which headers I need and in which header a certain function is" as a benefit. I agree so far as this can be true for well-designed libraries and headers. In practice however I experienced (at least with MFC/ATL) some errors which could be solved by figuring out the correct order of includes. On the other hand one day you want to resolve an issue which makes you travel across the included headers - now imagine yourself looking at tons of header files actually having nothing to do with your code file. My conclusion is: The time you save by including a bunch of unnecessary headers does not pay off if you have to maintain a large project later on. The more time you invest before starting including any headers, the more time you will save afterwards - but mostly without actually recognizing it. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16830134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1097451/']} | jdg_74418 |
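To make Sutter's first guideline from the question concrete, here is a minimal sketch (the Logger and Widget names are invented for illustration, not from the original post): a header that only handles a type via pointer or reference can use a forward declaration, so its includers never drag in - or recompile against - the full definition.

```cpp
#include <cassert>

// Forward declaration: enough to declare pointers/references to Logger,
// so a header like this would not need to #include the full definition.
class Logger;

class Widget {
public:
    explicit Widget(Logger* log) : log_(log) {}
    Logger* logger() const { return log_; }
private:
    Logger* log_;  // pointer member: an incomplete type is fine here
};

// The complete type is only required at the point of actual use:
class Logger {
public:
    int level = 0;
};
```

The point is that the complete type is only needed where Logger's members are actually accessed; everywhere else, the forward declaration keeps the dependency (and the compile-time cost) out of the header.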
stackexchange | llm_judgeable_groundtruth_similarity | 45090 |
Below is a question asked on the forum dsp.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm struggling to get my head round the mathematical proof for the alias frequencies in a sampled sine wave. I understand that sampling a sine wave of frequency $f_0$ every $t_s$ seconds gives you: $$x[n]=\sin(2\pi f_0nt_s)$$ I also understand that, because the sine wave is periodic every $2\pi$, you can add any multiple of $2\pi$ to the angle and get the same values for the sine, i.e., $$\sin(2\pi f_0nt_s)=\sin(2\pi f_0nt_s+2\pi m) \quad\text{(where $m$ is any integer).}$$ The proof I'm looking at then factors out $2\pi$ and $nt_s$ to get: $$\sin\left(2\pi(f_0+\frac{m}{nt_s})nt_s\right)$$ ...but then it says to let $m$ be an integer multiple of $n$ so we can replace the $\frac{m}{n}$ ratio with an integer $k$. I don't understand how $m$ can go from being "any integer" to "an integer multiple of $n$" . If $m$ is any integer and $n$ is an integer then how can the ratio between them be an integer? I know I'm missing something obvious here and I'm searching for that light-bulb moment but it's not happening. Because this is so fundamental to DSP I don't just want to accept the formula and move on without thoroughly understanding it.
Now provide the response and nothing else.
| The reason is that if it is true for any $m$ , it is also true for $m=kn$. I will sketch the proof in another way. Call $f_s = 1/t_s$ the sampling frequency, where $t_s$ is the sampling period. The two signals $x(t) = \sin(2\pi f_0 t)$ and $x_k(t) = \sin(2\pi (f_0 + k f_s) t)$ have the same values at the sampling instants (aliasing), i.e. $x[n] = x_k[n]$. Indeed, \begin{align}x[n] &= \sin (2 \pi f_0 n t_s) \\x_k[n] &= \sin \left(2 \pi (f_0 + k f_s) n t_s\right) \\&= \sin (2 \pi f_0 n t_s + 2\pi k f_s n t_s) \\&= \sin (2 \pi f_0 n t_s + 2\pi k n) = \sin (2 \pi f_0 n t_s)\\&=x[n]\end{align} | {} | {'log_upvote_score': 4, 'links': ['https://dsp.stackexchange.com/questions/45090', 'https://dsp.stackexchange.com', 'https://dsp.stackexchange.com/users/31571/']} | jdg_74419 |
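The identity above can also be checked numerically. A small sketch (the frequencies and the value of k are arbitrary illustrative choices): sampling f0 and its alias f0 + k*fs at the same rate produces sample sequences that agree to floating-point precision.

```python
import math

f0 = 3.0        # original tone frequency (Hz) -- illustrative value
fs = 32.0       # sampling frequency (Hz)
ts = 1.0 / fs   # sampling period
k = 2           # any integer: the alias sits at f0 + k * fs

x = [math.sin(2 * math.pi * f0 * n * ts) for n in range(64)]
x_k = [math.sin(2 * math.pi * (f0 + k * fs) * n * ts) for n in range(64)]

# The extra phase per sample is 2*pi*k*n -- a whole number of turns --
# so the two sequences coincide at every sampling instant.
max_diff = max(abs(a - b) for a, b in zip(x, x_k))
print(max_diff < 1e-9)
```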
stackexchange | llm_judgeable_groundtruth_similarity | 68182628 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I installed Terraform v1.0.1 on linux_amd64 ( Oracle Linux Srv 8.4 64bit ). I’m trying to use a local provider/plugin that I saved in the folder: /root/.terraform.d/plugins # ll /root/.terraform.d/pluginsdrwxr-xr-x. 2 root root 38 Jun 29 15:42 oldversion-rwxr-xr-x. 1 root root 30068808 Jun 29 15:42 terraform-provider-zabbixdrwxr-xr-x. 2 root root 52 Jun 29 15:42 test_plugging This is my /root/.terraformrc : provider_installation { filesystem_mirror { path = "/root/.terraform.d/plugins" } direct { exclude = ["registry.terraform.io/*/*"] }} This is my main.tf : terraform { required_version = ">= 0.12.6"}provider "zabbix" { username = local.provider_vars.zabbix.username password = local.provider_vars.zabbix.password url = local.provider_vars.zabbix.endpoint tls_insecure = true} but when I run: terraform init Initializing the backend... Initializing provider plugins... Finding latest version of hashicorp/zabbix... Error: Failed to query available provider packages Could not retrieve the list of available versions for provider hashicorp/zabbix: provider registry.terraform.io/hashicorp/zabbix was not found in any of the search locations /root/.terraform.d/plugins How can I fix this problem? Thanks for the help. Marco
Now provide the response and nothing else.
| Assuming you have a binary ~/.terraform.d/plugins/terraform.local/local/zabbix/1.0.0/linux_amd64/terraform-provider-zabbix_v1.0.0 Configure Terraform as follows terraform { required_providers { zabbix = { source = "terraform.local/local/zabbix" version = "1.0.0" # Other parameters... } }} Which works as follows terraform initInitializing the backend...Initializing provider plugins...- Finding terraform.local/local/zabbix versions matching "1.0.0"...- Installing terraform.local/local/zabbix v1.0.0...- Installed terraform.local/local/zabbix v1.0.0 (unauthenticated)Terraform has created a lock file .terraform.lock.hcl to record the providerselections it made above. Include this file in your version control repositoryso that Terraform can guarantee to make the same selections by default whenyou run "terraform init" in the future.Terraform has been successfully initialized! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/68182628', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11771565/']} | jdg_74420 |
stackexchange | llm_judgeable_groundtruth_similarity | 4446 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
The standard sum of squares as I know it is:$$\sum(X-m)^2$$where $m$ is the mean. I ran into a different one which can be written two ways:$$\sum(X^2) - \frac{(\sum X)^2}{n} = \sum(X^2) - m\sum X$$I believe the latter is called the "correction term for the mean" (e.g. here ). My algebra seems to be inadequate to show these are equivalent, so I was looking for a derivation.
Now provide the response and nothing else.
| Expanding the square we get: $\sum_i(X_i-m)^2 = \sum_i(X_i^2 + m^2 - 2 X_i m)$ Thus, $\sum_i(X_i-m)^2 = \sum_i{X_i^2} + \sum_i{m^2} - 2 \sum_i{X_i m}$ Since $m$ is a constant, we have: $\sum_i(X_i-m)^2 = \sum_i{X_i^2} + n m^2 - 2 m \sum_i{X_i}$ But, $\sum_i{X_i} = n m$. Thus, $\sum_i(X_i-m)^2 = \sum_i{X_i^2} + n m^2 - 2 n m^2$ which on simplifying gives us: $\sum_i(X_i-m)^2 = \sum_i{X_i^2} - n m^2$ Thus, we can rewrite the rhs of the above in two ways: $\sum_i{X_i^2} - m (n m) = \sum_i{X_i^2} - m \sum_i{X_i}$ (as $n m = \sum_i{X_i}$) and $\sum_i{X_i^2} - n (m)^2 = \sum_i{X_i^2} - \frac{(\sum_i{X_i})^2}{n}$ (as $m = \frac{\sum_i{X_i}}{n}$) | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/4446', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/1959/']} | jdg_74421 |
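The identity is also easy to sanity-check numerically; a quick sketch with arbitrary random data (any sample would do):

```python
import random

xs = [random.uniform(-10.0, 10.0) for _ in range(100)]
n = len(xs)
m = sum(xs) / n  # the sample mean

lhs = sum((x - m) ** 2 for x in xs)                  # sum of squared deviations
rhs1 = sum(x * x for x in xs) - (sum(xs) ** 2) / n   # "correction term" form
rhs2 = sum(x * x for x in xs) - m * sum(xs)          # equivalent second form

print(abs(lhs - rhs1) < 1e-8 and abs(lhs - rhs2) < 1e-8)
```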
stackexchange | llm_judgeable_groundtruth_similarity | 57330900 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've got an Ad component that is dynamically loaded into different pages across an eCommerce application. I'd like to be able to know how many of these components exist in a react page at any given time. I'm wondering if there's a react pattern for determining how many instances of a component exist in a page at any given time. I'd imagine you would do this in the root of the application, and iterate through react.children , counting the Ad components as they appear. Does anyone know the best practice for this approach? Thanks :) match({ routes, location, history: browserHistory }, (error, renderProps) => { const requireComponents = (props) => props ? props.components.map((component) => component && component.requireComponent && store.dispatch(component.requireComponent()) ) : []; const render = () => { store.initialRender = true; ReactDOM.hydrate( ( <AnalyticsProvider onEvent={analyticsHandler}> <Provider store={store}> <Router history={browserHistory}>{routes}</Router> </Provider> </AnalyticsProvider> ), document.querySelector(".js-content") ); store.initialRender = false; }; Promise.all(requireComponents(renderProps)) .then(render) .then(() => window.newrelic.addPageAction("FIT")); }); I've tried console.logging renderProps - and there are some routes in that object, and also a getComponent function but I haven't seen the results I'm looking for.
Now provide the response and nothing else.
| To count the number of mounted instances of a given component, you can add the following to your component ( Ad.js in your case). import React, { useEffect } from 'react'let instancesCount = 0const Ad = (props) => { useEffect(() => { instancesCount += 1 console.log({instancesCount}) return () => { instancesCount -= 1 console.log({instancesCount}) } }, []) return ( // Whatever your component does ) This will increment instancesCount each time an instance of Ad is mounted to the DOM, and decrement it each time an instance is unmounted, effectively giving you the exact number of mounted instances at any given time. It will also log that number every time an instance is mounted or unmounted. Requirement: React v >= 16.8.0 If you have a lower version of React, or if your Ad component is a class component, use this: let instancesCount = 0class Ad extends React.Component { componentDidMount = () => { instancesCount += 1 console.log({instancesCount}) } componentWillUnmount = () => { instancesCount -= 1 console.log({instancesCount}) } // Whatever your component does} | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/57330900', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1220243/']} | jdg_74422 |
stackexchange | llm_judgeable_groundtruth_similarity | 4788242 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I need to determine the length of string which may contain html-entities. For example "&darr ;" (↓) would return length 6, which is correct, but I want these entities to be counted as only 1 character.
Now provide the response and nothing else.
| <div id="foo">↓</div>alert(document.getElementById("foo").innerHTML.length); // alerts 1 So based on that rationale, create a div, append your mixed up entity ridden string to it, extract the HTML and check the length. var div = document.createElement("div");div.innerHTML = "↓↓↓↓";alert(div.innerHTML.length); // alerts 4 Try it here. You might want to put that in a function for convenience, e.g.: function realLength(str) { // maybe there's a better name? var el = document.createElement("div"); el.innerHTML = str; return el.innerHTML.length; } | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4788242', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/446835/']} | jdg_74423 |
stackexchange | llm_judgeable_groundtruth_similarity | 71387 |
Below is a question asked on the forum stats.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
My understanding is that a power analysis is post hoc if and only if it uses the observed effect size as the target population effect size.
Now provide the response and nothing else.
| In my field I see people doing post-hoc power analyses when the purpose of the paper is to show that some effect that one might have expected to be present (either because of previous literature, common sense, etc) is not, at least according to some significance test. However, in these situations, the researcher is in a bit of a bind -- he or she may have obtained a non-significant result either because the effect really is not present in the population or because the study was not sufficiently powered to detect the effect even if it were present. The purpose of the power analysis, then, is to show that, given even a trivially small effect in the population, the study would have had a high probability of detecting that effect. For a concrete example of this use of post-hoc power analysis, see this linked paper. | {} | {'log_upvote_score': 4, 'links': ['https://stats.stackexchange.com/questions/71387', 'https://stats.stackexchange.com', 'https://stats.stackexchange.com/users/9162/']} | jdg_74424 |
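To make the "would have had a high probability of detecting even a trivially small effect" argument concrete, here is a rough sketch using a normal approximation to a two-sided two-sample test (the helper name, effect size, and sample sizes are illustrative assumptions, not from the answer):

```python
import math
from statistics import NormalDist

def two_sample_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test (normal approximation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)
    nc = effect_size * math.sqrt(n_per_group / 2.0)  # noncentrality parameter
    return nd.cdf(nc - z_crit) + nd.cdf(-nc - z_crit)

# A "trivially small" standardized effect (d = 0.1):
low = two_sample_power(0.1, 20)      # small study
high = two_sample_power(0.1, 2500)   # large study
print(low < 0.1, high > 0.9)
```

With only 20 per group the power against d = 0.1 is barely above the 5% significance level, so a null result says almost nothing; with 2500 per group the power exceeds 90%, so a null result becomes genuinely informative about the absence of the effect.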
stackexchange | llm_judgeable_groundtruth_similarity | 196025 |
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
If I can do this in my bash shell: $ STRING="A String"$ echo ${STRING^^}A STRING How can I change my command line argument to upper case? I tried: GUARD=${1^^} This line produces a Bad substitution error.
Now provide the response and nothing else.
| Let's start with this test script: $ cat script.sh GUARD=${1^^}echo $GUARD This works: $ bash script.sh abcABC This does not work: $ sh script.sh abcscript.sh: 1: script.sh: Bad substitution This is because, on my system, like most debian-like systems, the default shell, /bin/sh , is not bash. To get bash features, one needs to explicitly invoke bash. The default shell on debian-like systems is dash . It was chosen not because of features but because of speed. It does not support ^^ . To see what it supports, read man dash . | {} | {'log_upvote_score': 4, 'links': ['https://unix.stackexchange.com/questions/196025', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/110304/']} | jdg_74425 |
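If switching the shebang or the invoking shell to bash is not an option, a POSIX-portable alternative (a sketch; the function name is mine) is to pipe through tr, which works in dash as well:

```shell
to_upper() {
    # POSIX-portable uppercasing; works in dash, where ${1^^} is a "Bad substitution"
    printf '%s' "$1" | tr '[:lower:]' '[:upper:]'
}

GUARD=$(to_upper "$1")
echo "$GUARD"
```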
stackexchange | llm_judgeable_groundtruth_similarity | 19320335 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Can anyone help me solve this issue? Thanks for helping !! Updated using this jsfiddle : Code: <!DOCTYPE HTML><script>function allowDrop(ev){ev.preventDefault();}function drag(ev){ev.dataTransfer.setData("Text",ev.target.id);}function drop(ev){ev.preventDefault();var data=ev.dataTransfer.getData("Text");var new_img = $('#'+data).clone();$('#'+ev.target.id).html(new_img);}</script><body><center><p>Drag the image you want into this box!</p><table><tr><td><p><b>Main Image</b></p><div id="div1" ondrop="drop(event)" ondragover="allowDrop(event)"></div></td><td><p><b>Image 2</b></p><div id="div2" ondrop="drop(event)" ondragover="allowDrop(event)"></div></td><td><p><b>Image 3</b></p><div id="div3" ondrop="drop(event)" ondragover="allowDrop(event)"></div></td></tr></table><img id="drag1" ondrop="drop(event)" draggable="true" width="150" height="150" ondragstart="drag(event)" src="http://netdna.webdesignerdepot.com/uploads/2013/02/thumbnail32.jpg" alt="img01"/></a><img id="drag2" ondrop="drop(event)" draggable="true" width="150" height="150" ondragstart="drag(event)" src="http://netdna.webdesignerdepot.com/uploads/html5-css3-wireframing/html5-logo.jpg" alt="img02"/></a><img id="drag3" ondrop="drop(event)" draggable="true" width="150" height="150" ondragstart="drag(event)" src="http://netdna.webdesignerdepot.com/uploads/2012/12/thumb-1.jpg" alt="img03"/></a></body><style type="text/css">#div1 {width:200px;height:200px;padding:10px;border:1px solid #aaaaaa;}#div2 {width:200px;height:200px;padding:10px;border:1px solid #aaaaaa;}#div3 {width:200px;height:200px;padding:10px;border:1px solid #aaaaaa;}</style> Using the code above, the original image is retained, but the issue is: if I drag another picture into a box that already has an image, it won't overwrite.
Now provide the response and nothing else.
| I now use https://github.com/karma-runner/karma-ng-html2js-preprocessor . What it does is read in all the templates that you use, convert them to Angular templates, and set them on the $templateCache, so when your app needs them, it will retrieve them from the cache instead of requesting them from the server. In my karma conf file files: [ // templates '../**/*.html'],preprocessors : { // generate js files from html templates '../**/*.html': 'ng-html2js'},ngHtml2JsPreprocessor: { // setting this option will create only a single module that contains templates // from all the files, so you can load them all with module('templates') moduleName: 'templates'}, And then in the test, do something like // Load templatesangular.mock.module('templates'); And it works! | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19320335', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1777711/']} | jdg_74426 |
stackexchange | llm_judgeable_groundtruth_similarity | 1640889 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I recently came across a piece of Java code with WeakReferences - I had never seen them deployed although I'd come across them when they were introduced. Is this something that should be routinely used or only when one runs into memory problems? If the latter, can they be easily retrofitted or does the code need serious refactoring? Can the average Java (or C#) programmer generally ignore them? EDIT Can any damage be done by over-enthusiastic use of WRs?
Now provide the response and nothing else.
| Weak references are all about garbage collection. A standard object will not "disappear" until all references to it are severed; this means all the references your various objects have to it have to be removed before garbage collection will consider it garbage. With a weak reference, just because your object is referenced by other objects doesn't necessarily mean it's not garbage. It can still get picked up by GC and get removed from memory. An example: If I have a bunch of Foo objects in my application I might want to use a Set to keep a central record of all the Foo's I have around. But, when other parts of my application remove a Foo object by deleting all references to it, I don't want the remaining reference my Set holds to that object to keep it from being garbage collected! Really I just want it to disappear from my set. This is where you'd use something like a Weak Set (Java has a WeakHashMap) instead, which uses weak references to its members instead of "strong" references. If your objects aren't being garbage collected when you want them to, then you've made an error in your bookkeeping: something's still holding a reference that you forgot to remove. Using weak references can ease the pain of such bookkeeping, since you don't have to worry about them keeping an object "alive" and un-garbage-collected, but you don't have to use them. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1640889', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/130964/']} | jdg_74427 |
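The same semantics exist outside Java/C#. As an illustration only (Python rather than Java, since the idea is identical), Python's weakref module shows the core behaviour - a weak reference does not keep its target alive - and weakref.WeakKeyDictionary plays a role similar to Java's WeakHashMap in the bookkeeping example.

```python
import gc
import weakref

class Foo:
    """Stand-in for the Foo objects in the example."""
    pass

obj = Foo()
ref = weakref.ref(obj)   # a weak reference: it does NOT count as a strong reference
assert ref() is obj      # the target is alive while a strong reference exists

del obj                  # sever the only strong reference
gc.collect()             # make collection deterministic across implementations
print(ref() is None)     # True: the weak reference did not keep the object alive
```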
stackexchange | llm_judgeable_groundtruth_similarity | 14457369 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
Below is my code. NSMutableArray *arr = [[NSMutableArray alloc] init];[arr addObject:@"5"];[arr addObject:@"7"];[arr addObject:@"8"];[arr enumerateObjectsUsingBlock:^(NSString *obj,NSUInteger idx,BOOL *stop) { [arr replaceObjectAtIndex:idx withObject:@"10"];}]; The exception log I got: *** Terminating app due to uncaught exception 'NSGenericException', reason: '*** Collection <__NSArrayM: 0x742a580> was mutated while being enumerated.'*** First throw call stack:(0x1596012 0x12a3e7e 0x161ecc5 0x158fe1b 0x158fa16 0x158f925 0x2ba4 0x1e87b7 0x1e8da7 0x1e9fab 0x1fb315 0x1fc24b 0x1edcf8 0x25f8df9 0x25f8ad0 0x150bbf5 0x150b962 0x153cbb6 0x153bf44 0x153be1b 0x1e97da 0x1eb65c 0x29fd 0x2925)libc++abi.dylib: terminate called throwing an exception The code works fine when I use a for loop: for (int i = 0 ; i< arr.count; i++) { [arr replaceObjectAtIndex:i withObject:@"8"];} But when I use enumerateObjectsUsingBlock I get the exception. Both are enumerations, right? So why does the first snippet throw the "was mutated while being enumerated" exception?
Now provide the response and nothing else.
| Because your logic is flawed. It is not permitted to mutate a collection during enumeration. And in the latter case, NSMutableArray doesn't know you're trying to enumerate it, only in the first case. And then it complains, since this is a semantic error. You should generally solve these kinds of problems by mutable-copying the array and mutating the copy, then replacing the original one by the updated copy. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/14457369', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/470086/']} | jdg_74428 |
stackexchange | llm_judgeable_groundtruth_similarity | 42897844 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
When I use the method .fileExists(atPath:) to check whether a file exists in the file system, the method always returns false. I checked the file system and the file does exist. Here is my code: let filePath = url?.pathvar isDir : ObjCBool = falseif(self.fileManager.fileExists(atPath: filePath!, isDirectory: &isDir)){ let result = NSData(contentsOfFile: filePath!)} or let filePath = url?.pathif(self.fileManager.fileExists(atPath: filePath!)){ let result = NSData(contentsOfFile: filePath!)} The if clause is always skipped.
Now provide the response and nothing else.
| I assume your url is a URL type. If so, try this out: let filePath = url?.path // always try to work with URL when accessing Filesif(FileManager.default.fileExists(atPath: filePath!)){ // just use String when you have to check for existence of your file let result = NSData(contentsOf: url!) // use URL instead of String} That said, you should change your implementation like this: if(FileManager.default.fileExists(atPath: (url?.path)!)){ // just use String when you have to check for existence of your file let result = NSData(contentsOf: url!) // use URL instead of String} EDIT 1: There is an even better way; you can call it the Swift way (:D). You don't have to explicitly check for file existence . guard let result = NSData(contentsOf: fileURL) else { // No data in your fileURL. So no data is received. Do your task if you got no data // Keep in mind that you don't have access to your result here. // You can return from here. return}// You got your data successfully that was in your fileURL location. Do your task with your result.// You can have access to your result variable here. You can do further work with the result constant.print(result) Update for Swift 3.0+ without the Objective-C-ish NS prefix: do { let result = try Data(contentsOf: fileURL) print(result)} catch { print(error)} | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/42897844', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4732229/']} | jdg_74429 |
stackexchange | llm_judgeable_groundtruth_similarity | 10556348 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have an ASP.NET MVC3 project that uses Entity Framework 4.3 with the code-first approach. I use Migrations to keep the database up-to-date. The project is under source-control and I have a number of branches. What I just realized is that there will be a problem when I want to merge one of my branches into the master. Since I have created migration-files in both branches, there will be overlapping migrations when I merge, which will probably cause conflicts. Is there a good way to manage Migrations in a project with multiple branches? Update One way would be to merge, then delete all migration-files created while the branches were separate, and then create one new migration file that holds all changes from the time the branch was created until it was merged back in. This would work for the dev-environment where you can dump the database and re-build it with all the migration-files. The problem then would be the live-environment. Since you couldn't roll back to the time the branch was created without the risk of losing data, there will be a conflict when you try to use your new migration-file to update the live database.
Now provide the response and nothing else.
| There is a much better solution for handling entity framework migration merge conflicts on a similar question. All you need to do after a merge is to re-scaffold the metadata of the migration in the target branch. That is, you do not rescaffold the up/down code, just the state in the resx-file. add-migration [the_migration_to_rescaffold_metadata_for] This almost always works. The procedure will fail if a different migration in the merge has changed the database in such a way that the migration is no longer runnable or gives an unexpected result. That being said - I believe that to be a very rare case as most migrations should be auto-generated, or at least not be dependent on other tables that are not changed in the migration itself as well. One such case where rescaffolding state would fail could be: Column foo is an int and rows contain [0, 1, 2] Migration A from branch A changes foo to boolean (0 will become false automatically and > 0 will become true) Migration B from branch B changes foo to string. It expects it to be an int but it is a boolean; the migration will succeed though. Data will be lost since when migration B was created the rows would contain ["0", "1", "2"]. When migration A altered the column to boolean (and did so successfully and with the expected result) the rows will now contain ["0", "1", "1"] instead and Migration B will have a different end result than what was observed in Branch B. There are probably more edge cases where things could go wrong with the solution. But if the migrations' up/down code is not dependent on things changed by another migration in the merge, it should work well to just update the metadata in the migrations. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10556348', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/678801/']} | jdg_74430 |
stackexchange | llm_judgeable_groundtruth_similarity | 43974888 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a postgres:9.5.6-alpine container, and another container, named web, which has to be linked to it. I want to run a script named create_db.sh in the postgres container after it has been started and docker-entrypoint.sh has been executed, in order to create a db and a user and restore a backup. My docker-compose.yml (postgres part):

postgres:
  build: ./postgres
  container_name: postgres
  volumes:
    - /shared_folder/postgresql:/var/lib/postgresql
  ports:
    - "5432:5432"
  command: sh /home/create_db.sh

The content of create_db.sh is:

#!/bin/sh
psql -d template1 -U postgres
psql --command "CREATE USER user WITH PASSWORD 'userpassword';"
psql --command "CREATE DATABASE userdb;"
psql --command "GRANT ALL PRIVILEGES ON DATABASE userdb to user;"
psql --command "\q;"
psql -U user -d userdb -f /var/lib/postgresql/backup.sql
exit

When I run docker-compose build and then docker-compose up I get this:

Attaching to postgres, web
postgres | psql: could not connect to server: No such file or directory
postgres | Is the server running locally and accepting
postgres | connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

I've understood this is because when I launch create_db.sh the postgres server is not ready, so how can I run it after the docker-entrypoint.sh of the container?
Now provide the response and nothing else.
| You are overriding the original command and you do not start postgres in this script, which is why your database is not available. You can put your database initialization into the container's entrypoint directory: /docker-entrypoint-initdb.d. This executes all *.sh and *.sql files in this directory and does not touch the original command. All files in this directory are automatically executed in alphabetical order on container creation. Therefore, create a volume to add your scripts / sql files to the entrypoint and let the container execute them. This is described in the official postgres documentation, section "How to extend this image". Your compose file then changes to something like this:

postgres:
  build: ./postgres
  volumes:
    - /shared_folder/postgresql:/var/lib/postgresql
    - ./db-init-scripts:/docker-entrypoint-initdb.d
  ports:
    - "5432:5432"

whereas a local directory, e.g. db-init-scripts, contains your initialization scripts (rename it if you want). Copy create_db.sh to this folder and it will be automatically executed when you create a new container. Several database images watch this entrypoint directory, which is very convenient. Your container_name: postgres seems redundant. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43974888', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2554484/']} | jdg_74431
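To make that concrete, the create_db.sh logic from the question can become a plain SQL file dropped into db-init-scripts; the entrypoint runs it once the server accepts connections. Treat this as an illustrative sketch: the file name is arbitrary, the names echo the question, and the role is renamed because `user` is a reserved word in PostgreSQL:

```sql
-- db-init-scripts/01_create_db.sql (name illustrative; files run alphabetically)
CREATE USER myuser WITH PASSWORD 'userpassword';
CREATE DATABASE userdb OWNER myuser;
GRANT ALL PRIVILEGES ON DATABASE userdb TO myuser;
```

The backup restore from the question can follow as a second file (e.g. 02_restore.sql) so it runs after the database exists.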
stackexchange | llm_judgeable_groundtruth_similarity | 752479 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I've tried to Google this issue, and I can't find anything that I see as relevant. So I must be looking for the wrong thing; nonetheless, I'd appreciate some advice...

Foobar &foobar = *new Foobar(someArg, anotherArg);

Is it just me or does that seem smelly? I understand that the new keyword is designed for use with pointers (as such):

Foobar *foobar = new Foobar(someArg, anotherArg);

But what if you don't require a pointer on that instance, and you would like to use a reference instead? Or, is it the case that you don't need to explicitly initialize it (much like local variables); and if this is the case, what if I want to initialize with parameters? The following does not work (unless I'm doing something wrong):

// the last argument is of type: int
Foobar &foobar(someArg, anotherArg);

... gives the compiler error:

initializer expression list treated as compound expression
invalid initialization of non-const reference of type ‘Foobar&’ from a temporary of type ‘int’

And also this doesn't seem to work:

Foobar &foobar = Foobar(someArg, anotherArg);

... gives the compiler error:

error: invalid initialization of non-const reference of type ‘Foobar&’ from a temporary of type ‘Foobar’

Update 1: Bear in mind that I am returning this value, so I don't want to use a local variable; I want to use a value on the heap, not the stack:

Foobar &helloWorld()
{
    Foobar &foobar = *new Foobar(someArg, anotherArg);
    foobar.HelloWorld();
    return foobar;
}

Should I just be using pointers instead, or is this completely valid?
Now provide the response and nothing else.
| Why do you think you need to use new and references at all? Why not:

Foobar foobar(someArg, anotherArg);

For your function - return a value:

Foobar helloWorld()
{
    Foobar foobar(someArg, anotherArg);
    foobar.HelloWorld();
    return foobar;
}

or a pointer:

Foobar * helloWorld()
{
    Foobar * foobar = new Foobar(someArg, anotherArg);
    foobar->HelloWorld();
    return foobar;
}

If you do this - the caller is responsible for deleting the allocated object at some point. Return from a non-member function is one place where references can typically not be used sensibly, as the thing you would like to refer to usually no longer exists. | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/752479', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/47775/']} | jdg_74432
stackexchange | llm_judgeable_groundtruth_similarity | 24181 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
By a simple mathematical expression I mean one with a few functions and few variables copied into MMA in a mechanical way from a math book. For example this one, giving the distance between two points in a rectangular Cartesian coordinate plane:

u = {-3, 3}; v = {1, 5};
d = ((Abs[u[[1]]] - Abs[v[[1]]])^2 + (Abs[u[[2]]] - Abs[v[[2]]])^2)^(1/2)

Except for the use of the Abs built-in function (I found no way in InputForm to type in the vertical bars on each side of the variable, which is possible in TraditionalForm), you recognize at a glance what this is about with an elementary knowledge of MMA. Now I read in the MMA documentation that it is advised to convert any "math-like" expression into a pure function, but apart from many examples suited to specific expressions - mine is surely another - I did not find any guidelines which generalise the approach to take. So I proceeded in an empirical way, a "down-up" approach:

Abs[#] &@ {u, v}
Thread[Plus[Abs[#] &@ %]]
Apply[Subtract[##] &, %, {1}]
(#^2) &[%]
Sqrt[Apply[Plus[##] &, %]]

Only pure functions here, which can be condensed by copy/pasting the originating expression in place of % like this:

Sqrt[Apply[Plus[##] &, (#^2) &[Apply[Subtract[##] &, Thread[Plus[Abs[#] &@ {u, v}]], {1}]]]]

I'm sure that if I read this last expression in a week's time, I'll be perplexed unless I trace it! Is it easy to simplify this concatenation of 4 pure functions into one single pure function, and if so, how? I feel that the way the data is presented before you start writing MMA code is of prime importance: instead of two lists, I could have used one list with sublists, or a flattened-out list (i.e. {u1, u2, v1, v2}), and each of these choices has a strong bearing on what you will code afterwards. Are there any guidelines about what is less messy or more elegant in the way lists are to be constructed for a given problem? Thanks for your answers
Now provide the response and nothing else.
| If the question is about converting general math-book expressions to pure functions, you could use something like

SetAttributes[convert, HoldAll];
convert[expr_, vars_List] :=
  With[{variables = Unevaluated@vars},
    Block[variables,
      Evaluate@(Hold[expr] /. Thread[vars -> Slot /@ Range@Length@vars]) & // ReleaseHold
    ]]

To apply,

convert[Sqrt[(Abs[u[[1]]] - Abs[v[[1]]])^2 + (Abs[u[[2]]] - Abs[v[[2]]])^2], {u, v}]

and you get

Sqrt[(Abs[#1[[1]]] - Abs[#2[[1]]])^2 + (Abs[#1[[2]]] - Abs[#2[[2]]])^2] &

Alternative definition:

convert[expr_, vars_List] := Function @@@ Hold[{vars, expr}] // ReleaseHold

and the output would be the other kind of pure function:

Function[{u, v}, Sqrt[(Abs[u[[1]]] - Abs[v[[1]]])^2 + (Abs[u[[2]]] - Abs[v[[2]]])^2]]

Of course one could simply type that to begin with. I'm not confident I understand the question. I'm not sure why one would want to de-compose a formula into component functions, but here are some variations à la @ThiesHeidecke's answer:

u = {-3, 3}; v = {1, 5};
Composition[Sqrt, Total, #^2 &, Subtract @@ # &, Abs[{##}] &][u, v]
(* 2 Sqrt[2] *)

Beware: Things go awry if u, v are not vectors. A way to avoid such a pitfall is

Composition[Sqrt, Total, #^2 &, Subtract @@ # &, Abs, Outer[Part, {##}, {1, 2}, 1] &][u, v]

But now this is limited to 2-dimensional vectors. If we abandon pure functions, we can define a function to handle all cases:

distAbs[u_?VectorQ, v_?VectorQ] :=
  Composition[Sqrt, Total, #^2 &, Subtract @@ # &, Abs[{##}] &][u, v]

As others have pointed out, this particular function can be encoded less ornately as

EuclideanDistance @@ Abs[{##}] &

For other formulas, the following is straightforward and easily adapted:

distAbs[u_?VectorQ, v_?VectorQ] :=
  Sqrt[(Abs[u[[1]]] - Abs[v[[1]]])^2 + (Abs[u[[2]]] - Abs[v[[2]]])^2];

I would not be surprised if this is the sort of thing you're looking for.
This is the easiest way to make a function out of a formula, even if the formula might be expressed more simply in terms of other Mathematica functions. | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/24181', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/1954/']} | jdg_74433
stackexchange | llm_judgeable_groundtruth_similarity | 178641 |
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
I am interested in replacing all elements in a diagonal of a sparse matrix with zero. Currently, I do it in the following manner:

spA = SparseArray[{{1, 1} -> 1, {2, 2} -> 2, {3, 3} -> 3, {1, 3} -> 4}]
withOutDiagonalELements = DeleteCases[ArrayRules[spA], {a_, a_} -> _];
spAwd = SparseArray@Append[withOutDiagonalELements, {_, _} -> 0.0];

Any suggestions on how to do it more efficiently?
Now provide the response and nothing else.
| removeDiagonal = # SparseArray[SparseArray`SparseArrayRemoveDiagonal[#]["NonzeroPositions"] -> 1, Dimensions[#]] &;

removeDiagonal[spA]

SparseArray[<4>,{3,3}]

removeDiagonal[spA] // MatrixForm // TeXForm

$\left(\begin{array}{ccc} 0 & 0 & 4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\\end{array}\right)$

removeDiagonal[spA]["NonzeroPositions"]

{{1,3}}

Some timings:

f1 = removeDiagonal;
f2 = SparseArray[# - DiagonalMatrix[Diagonal[#, 0], 0, Dimensions[#]]] &; (* Henrik Shumacher *)
f3 = With[{zeros = 1 - IdentityMatrix[Dimensions@#, SparseArray]}, # * zeros] &; (* Mr. Wizard *)
f4 = ReplacePart[#, {i_, i_} :> 0] &; (* tomd *)

m1 = SparseArray[Tuples[RandomSample[Range[101], 100], {2}] -> 1, {101, 101}];
m2 = SparseArray[Tuples[RandomSample[Range[100000], 100], {2}] -> 1, {100000, 100000}];

m = m1;
t11 = First[RepeatedTiming[r11 = f1 @ m;]];
t21 = First[RepeatedTiming[r21 = f2 @ m;]];
t31 = First[RepeatedTiming[r31 = f3 @ m;]];
t41 = First[RepeatedTiming[r41 = f4 @ m;]];

r11 == r21 == r31 == r41

True

m = m2;
t12 = First[RepeatedTiming[r12 = f1 @ m;]];
t22 = First[RepeatedTiming[r22 = f2 @ m;]];
t32 = First[RepeatedTiming[r32 = f3 @ m;]];
t42 = "n/a"; (* b/c computation exceeded the limitation of free plan on Wolfram Cloud *)

r12 == r22 == r32

True

{{"dimensions", "non-zero elements", "f1", "f2", "f3", "f4"},
  {{101, 101}, 100, t11, t21, t31, t41},
  {{100000, 100000}, 100, t12, t22, t32, t42}} // Grid[#, Dividers -> All] & // TeXForm

$\small\begin{array}{|c|c|c|c|c|c|}\hline \text{dimensions} & \text{non-zero elements} & \text{f1} & \text{f2} & \text{f3} & \text{f4} \\\hline \{101,101\} & 100 & 0.000322 & 0.000279 & 0.000131 & 0.00487 \\\hline \{100000,100000\} & 100 & 0.0017 & 0.0053 & 0.004 & \text{n/a} \\\hline\end{array}$ | {} | {'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/178641', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/49988/']} | jdg_74434
stackexchange | llm_judgeable_groundtruth_similarity | 3890567 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I want to create an insert script which will be used only to insert one record into one table. It has 5 columns and one of them is of type CLOB. Whenever I try, it fails saying the string is too long - larger than 4000 characters. I need an insert statement with a CLOB as one field.

INSERT INTO tbltablename (id, NAME, description, accountnumber, fathername)
VALUES (1, N'Name', clob'some very long string here, greater than 4000 characters', 23, 'John');
Now provide the response and nothing else.
| Keep in mind that SQL strings can not be larger than 4000 bytes, while PL/SQL can have strings as large as 32767 bytes. See below for an example of inserting a large string via an anonymous block, which I believe will do everything you need it to do. Note I changed the varchar2(32000) to CLOB.

set serveroutput ON

CREATE TABLE testclob
  (
    id NUMBER,
    c  CLOB,
    d  VARCHAR2(4000)
  );

DECLARE
  reallybigtextstring CLOB := '123';
  i                   INT;
BEGIN
  WHILE Length(reallybigtextstring) <= 60000 LOOP
    reallybigtextstring := reallybigtextstring || '000000000000000000000000000000000';
  END LOOP;

  INSERT INTO testclob (id, c, d)
  VALUES (0, reallybigtextstring, 'done');

  dbms_output.Put_line('I have finished inputting your clob: ' || Length(reallybigtextstring));
END;
/

SELECT * FROM testclob;

"I have finished inputting your clob: 60030" | {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3890567', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/245554/']} | jdg_74435
stackexchange | llm_judgeable_groundtruth_similarity | 37535971 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to learn how to program with Ruby and I want to create separate files for separate classes, but when I do I get the following message:

NameError: uninitialized constant Book
  const_missing at org/jruby/RubyModule.java:2677
  (root) at /Users/Friso/Documents/Projects/RubyApplication1/lib/main.rb:1

However, it works if I put the class directly into the main file. How can I solve this? Main code:

book1 = Book.new("1234", "Hello", "Ruby")
book2 = Book.new("4321", "World", "Rails")
book1.to_string
book2.to_string

Class code:

class Book
  def initialize(isbn, title, author)
    @book_isbn = isbn
    @book_title = title
    @book_author = author
  end

  def to_string
    puts "Title: #@book_title"
    puts "Author: #@book_author"
    puts "ISBN: #@book_isbn"
  end
end
Now provide the response and nothing else.
| In order to include classes, modules, etc. into other files you have to use require_relative or require (require_relative is more Rubyish). For example this module:

module Format
  def green(input)
    puts "\e[32m#{input}\e[0m"
  end
end

Now I have this file:

require_relative "format" # <= require the file
include Format            # <= include the module

def example
  green("this will be green") # <= call the formatting
end

The same concept goes for classes:

class Example
  attr_accessor :input

  def initialize(input)
    @input = input
  end

  def prompt
    print "#{@input}: "
    gets.chomp
  end
end

example = Example.new(ARGV[0])

And now I have the main file:

require_relative "class_example"
example.prompt

In order to call any class or module from another file, you have to require it. I hope this helps, and answers your question. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/37535971', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1984657/']} | jdg_74436
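To make the file-splitting concrete, the sketch below writes a small class file next to the running script and then pulls it in with require_relative. The file-writing step exists only so the example is self-contained; in a real project book.rb would already exist (names echo the question, the summary method is an illustrative stand-in for to_string):

```ruby
# Write the class to a sibling file purely for demonstration purposes.
# The single-quoted heredoc (<<~'RUBY') keeps #{...} uninterpolated here.
File.write(File.join(__dir__, "book.rb"), <<~'RUBY')
  class Book
    def initialize(isbn, title, author)
      @isbn, @title, @author = isbn, title, author
    end

    def summary
      "Title: #{@title}, Author: #{@author}, ISBN: #{@isbn}"
    end
  end
RUBY

require_relative "book" # <= Book is now defined in this file too

book = Book.new("1234", "Hello", "Ruby")
puts book.summary
```

Note that require_relative resolves the path relative to the file that calls it, not the process working directory, which is why the example writes book.rb next to the script via __dir__.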
stackexchange | llm_judgeable_groundtruth_similarity | 41191552 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I saw this code on a package:

const SortableList = SortableContainer(({items}) => {
  return (
    <ul>
      {items.map((value, index) =>
        <SortableItem key={`item-${index}`} index={index} value={value} />
      )}
    </ul>
  );
});

What is happening to items by putting curly braces around it in the function parameters?
Now provide the response and nothing else.
| This is destructuring assignment syntax. As another example, the following two lines of code are equal:

const { items } = args
const items = args.items

Simply put, it is a simplified way of accessing a specific field of a given variable for further use in that scope. In your original example, it is declaring a variable items for use in the function body that is the items field of that first argument.

const SortableList = SortableContainer(({items}) => {
  // do stuff with items here

is equal to

const SortableList = SortableContainer((input) => {
  const items = input.items
  // do stuff with items here

| {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/41191552', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1769217/']} | jdg_74437
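The equivalence is easy to check directly. This sketch mirrors the SortableList example with a plain function in place of the library wrapper (the names are illustrative):

```javascript
// Destructured parameter: pull `items` straight out of the props object.
const renderItems = ({ items }) =>
  items.map((value, index) => `item-${index}: ${value}`);

// Long form of exactly the same function, without destructuring.
const renderItemsLong = (props) => {
  const items = props.items;
  return items.map((value, index) => `item-${index}: ${value}`);
};

const props = { items: ["alpha", "beta"] };
console.log(renderItems(props));     // [ 'item-0: alpha', 'item-1: beta' ]
console.log(renderItemsLong(props)); // identical output
```

Both functions receive the same single object; destructuring only changes how its `items` field is bound to a local name.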
stackexchange | llm_judgeable_groundtruth_similarity | 9573 |
Below is a question asked on the forum datascience.stackexchange.com. Provide a good and informational response to it like a helpful human would.
Question:
In this wiki page, I came across the following phrase: "When data is not labeled, a supervised learning is not possible, and an unsupervised learning is required." I cannot figure out why supervised learning is not possible. I'd appreciate any help to resolve this ambiguity.
Now provide the response and nothing else.
| The main difference between supervised and unsupervised learning is the following:

In supervised learning you have a set of labelled data, meaning that you have the values of the inputs and the outputs. What you try to achieve with machine learning is to find the true relationship between them, what we usually call the model in math. There are many different algorithms in machine learning that allow you to obtain a model of the data. The objective that you seek, and how you can use machine learning, is to predict the output given a new input, once you know the model.

In unsupervised learning you don't have the data labelled. You can say that you have the inputs but not the outputs. And the objective is to find some kind of pattern in your data. You can find groups or clusters that you think belong to the same group or output. Here you also have to obtain a model. And again, the objective you seek is to be able to predict the output given a new input.

Finally, going back to your question, if you don't have labels you cannot use supervised learning; you have to use unsupervised learning. | {} | {'log_upvote_score': 4, 'links': ['https://datascience.stackexchange.com/questions/9573', 'https://datascience.stackexchange.com', 'https://datascience.stackexchange.com/users/15115/']} | jdg_74438
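A tiny pure-Python sketch of the distinction — the data and the two toy algorithms (nearest-neighbour prediction, gap-based clustering) are illustrative inventions, not part of the original answer:

```python
# Supervised: labelled pairs (input, output) are available, so we can
# learn a mapping and predict the output for a new input.
labelled = [(1.0, "small"), (1.2, "small"), (8.9, "large"), (9.3, "large")]

def predict(x):
    """1-nearest-neighbour: copy the label of the closest training input."""
    return min(labelled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: only inputs are available, so there is nothing to predict;
# the best we can do is discover structure, e.g. split into two clusters.
unlabelled = [1.0, 1.2, 8.9, 9.3]

def two_clusters(xs):
    """Cut the sorted points at the largest gap between neighbours."""
    xs = sorted(xs)
    gaps = [(xs[i + 1] - xs[i], i) for i in range(len(xs) - 1)]
    _, cut = max(gaps)
    return xs[:cut + 1], xs[cut + 1:]

print(predict(1.1))              # small
print(two_clusters(unlabelled))  # ([1.0, 1.2], [8.9, 9.3])
```

The point of the sketch: predict needs the labels in labelled to exist at all, while two_clusters never looks at a label — remove the labels and only the second kind of learning remains possible.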
stackexchange | llm_judgeable_groundtruth_similarity | 2407653 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I have a jQuery AJAX call with type:'GET' like this:

$.ajax({
  type: 'GET',
  url: '/createUser',
  data: "userId=12345&userName=test",
  success: function(data){
    alert('successful');
  }
});

In my console the output is:

GET: http://sample.com/createUser?userId=12345&userName=test
params: userId 12345 userName test

In my script I should get the value using $_GET['userId'] and $_GET['userName'], but I can't get the values passed in by the ajax request using the GET method. Any ideas on how to do this? thanks,
Now provide the response and nothing else.
| The only thing I can see wrong with the code (which no longer applies as the question has been edited (which suggests that the code has been rewritten for the question and might not accurately reflect the actual code in use)) is that the success function is in the wrong place. You have:

$.ajax(
  {
    type:'GET',
    url:'/createUser',
    data:"userId=12345&userName=test"
  },
  success: function(data){
    alert('successful');
  }
);

which should be:

$.ajax(
  {
    type:'GET',
    url:'/createUser',
    data:"userId=12345&userName=test",
    success: function(data){
      alert('successful');
    }
  }
);

Despite this, your description of the console output suggests the data is being sent correctly. I'd try testing with this script to see what PHP actually returns (you can see the body of the response in the Firebug console):

<?php
  header("Content-type: text/plain");
  print_r($_REQUEST);
?>

| {} | {'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/2407653', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/250471/']} | jdg_74439
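Independently of PHP, you can sanity-check how that query string decomposes into parameters with any URL parser — here via Python's standard library, using the URL reported in the question:

```python
from urllib.parse import urlsplit, parse_qs

# The request URL reported in the browser console.
url = "http://sample.com/createUser?userId=12345&userName=test"

params = parse_qs(urlsplit(url).query)
print(params)  # {'userId': ['12345'], 'userName': ['test']}
```

If the parameters decompose cleanly like this but still do not appear in $_GET, the problem is likely on the server side (e.g. a rewrite rule, or the script reading $_POST instead), not in the jQuery call itself.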
stackexchange | llm_judgeable_groundtruth_similarity | 43305050 |
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would.
Question:
I'm trying to change the default path of the integrated terminal in Visual Studio Code, but I'm not sure how to. I do know how to change it in the windows command prompt, but not in Visual Studio Code. I looked in user settings, but I can't find anything there to change. The current default path is C:\Users\User_Name . I'd like to change it to C:\Project . How do I do this in Visual Studio Code?
Now provide the response and nothing else.
| Short answer

Edit the user preference "terminal.integrated.cwd": "" to the path that you want the integrated terminal to open to.

Long answer

The same answer, but the long step-by-step version. In Visual Studio Code go to: Menu File → Preferences → Settings. Now that you are in the "User Settings", using the "Search Settings" bar across the top of the window, paste or type this: terminal.integrated.cwd. It will list the following as a result:

// An explicit start path where the terminal will be launched, this is used
// as the current working directory (cwd) for the shell process. This may be
// particularly useful in workspace settings if the root directory is not a
// convenient cwd.
"terminal.integrated.cwd": "",

You will notice that it will not let you type here to change this setting. That is because you can't change the default setting. You instead need to change your personal settings. Here's how...

Click the pencil icon to the left of this option and then the "Copy to Settings" option that pops up. You should have a split screen in which the right side of the screen has the heading Place your settings here to overwrite the Default Settings. This is the correct place for you to make changes. You might already have a few personalized settings listed here. When you clicked "Copy to Settings" it automatically added this line for you:

"terminal.integrated.cwd": ""

Notice that whichever item is last in this list will not have a trailing comma, but any items before it in the list will require one. FYI: you could have simply typed or copy/pasted this into the personalized settings yourself, but following these steps is the process to learn for changing other preferences as needed. Now you are able to type to set the path you want to use. Make sure to use \\ in place of \ and you do not need the trailing \.
For example, including this line would always start your terminal in the baz directory:

{
  "terminal.integrated.cwd": "C:\\Users\\foo\\bar\\baz"
}

To apply the change, simply save and restart Visual Studio Code. | {} | {'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/43305050', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7839895/']} | jdg_74440
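As a related sketch: VS Code also expands variables such as ${workspaceFolder} in this setting (treat the exact variable support as an assumption to verify for your VS Code version), so a workspace-level settings.json can pin the terminal to a project subfolder without hard-coding an absolute path — the src suffix below is just an example:

```json
{
  "terminal.integrated.cwd": "${workspaceFolder}/src"
}
```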