_id | partition | text | language | title |
---|---|---|---|---|
d13101 | train | /*responsive code begin*/
/*remove the responsive code if you don't want the slider to scale while the window is resizing*/
function ScaleSlider() {
var refSize = jssor_1_slider.$Elmt.parentNode.clientWidth;
if (refSize) {
refSize = Math.min(refSize, 809);
jssor_1_slider.$ScaleWidth(refSize);
}
else {
window.setTimeout(ScaleSlider, 30);
}
}
ScaleSlider();
$Jssor$.$AddEvent(window, "load", ScaleSlider);
$Jssor$.$AddEvent(window, "resize", ScaleSlider);
$Jssor$.$AddEvent(window, "orientationchange", ScaleSlider);
/*responsive code end*/ | unknown | |
d13102 | train | I just checked the manual of the "wec" package.
I suspect that you may need to replace your argument "ref" with "omitted". | unknown | |
d13103 | train | Emmm...like ylim()?
dendrogram(Z, size(Z,1), 'Orient', 'Left', 'Labels', species);
ylim(max(ylim())-[30,0]);
yields | unknown | |
d13104 | train | I think you are missing the initialization of SalesHelper.Instance.
Doing this: new Lazy<SalesHelper>(() => new SalesHelper()); leads to an instance whose _cache is not initialized.
So we have a couple of workarounds to choose from.
One of them is to initialize the Instance:
SalesHelper.Instance.SetCache(_cacheSale);
It should look like this:
//_tempSales = new SalesHelper();
ICacheData<Sale> _cacheSale = new Sale();
//_tempSales.SetCache(_cacheSale);
//_tempSales.GetAllData();
SalesHelper.Instance.SetCache(_cacheSale);
SalesHelper.Instance.GetAllData(); //Now it should return the info
_service = new ServiceHost(typeof(CacheDataService));
The other one is to replace your Instance property with a factory method GetInstance() which receives the cache and sets it if needed.
Let me know if the first workaround solves your problem. | unknown | |
d13105 | train | So the issue seemed to be related to the Redirect directives.
We removed them and added the following for 443:
RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-Proto} ^http$
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301,NE]
# Redirect / to /identityiq
RedirectMatch ^/$ /identityiq
We removed them and added the following for 80:
Redirect permanent / https://mitestui02.sn.test.net/
Now it is working as expected. | unknown | |
d13106 | train | Perhaps this is not surprising.
Your GeForce 8400M G is an old mobile card with only 8 cores, see the GeForce 8M series specifications, so you cannot extract much parallelism out of it.
Brutally speaking, GPUs are advantageous over multicore CPUs when you are capable of massively extracting parallelism across a large number of cores. In other words, to quickly build up an Egyptian pyramid with slow slaves (GPU cores) you need a large number of slaves. If you have only very few slow slaves (8 in your case), then perhaps it is better to have even fewer (2 CPU cores, for example), but much faster, slaves.
EDIT
I just remembered having bumped into this post
Finding minimum in GPU slower than CPU
which may help convince you that bad implementations (as underlined by Abid Rahman and Mailerdaimon) may lead to GPU code that is slower than CPU code. The situation is even worse if, as pointed out in the answer to the post above, you are also hosting the X display on your already limited GeForce 8400M G card.
A: Additionally to what @JackOLantern said:
Every copy operation involving the GPU takes time! A lot of time compared to just computing with the CPU. This is why @Abid Rahman K's comment is a good idea: he suggested testing again with more complex code. The advantage of the GPU is fast parallel processing; one of its disadvantages is the relatively slow transfer rate while copying data to and from the GPU. | unknown | |
d13107 | train | So... the suggestion to factor out common code into another module is
a good one. But, you shouldn't name modules *.pl, and you shouldn't
load them by require-ing a certain pathname (as in require
"../lib/foo.pl";). (For one thing, saying '..' makes your script
depend on being executed from the same working directory every time.
So your script may work when you run it as perl foo.pl, but it won't
work when you run it as perl YourApp/foo.pl. That is generally not good.)
Let's say your app is called YourApp. You should build your
application as a set of modules that live in a lib/ directory. For
example, here is a "Foo" module; its filename is lib/YourApp/Foo.pm.
package YourApp::Foo;
use strict;
sub do_something {
# code goes here
}
1; # a module must end with a true value
Now, let's say you have a module called "Bar" that depends on "Foo".
You just make lib/YourApp/Bar.pm and say:
package YourApp::Bar;
use strict;
use YourApp::Foo;
sub do_something_else {
return YourApp::Foo::do_something() + 1;
}
1; # a module must end with a true value
(As an advanced exercise, you can use Sub::Exporter or Exporter to
make use YourApp::Foo install subroutines in the consuming package's
namespace, so that you don't have to write YourApp::Foo:: before
everything.)
Anyway, you build your whole app like this. Logical pieces of
functionality should be grouped together in modules (or even better,
classes).
To make all this run, you write a small script that looks like this (I
put these in bin/, so let's call it bin/yourapp.pl):
#!/usr/bin/env perl
use strict;
use warnings;
use feature ':5.10';
use FindBin qw($Bin);
use lib "$Bin/../lib";
use YourApp;
YourApp::run(@ARGV);
The key here is that none of your code is outside of modules, except a
tiny bit of boilerplate to start your app running. This is easy to
maintain, and more importantly, it makes it easy to write automated
tests. Instead of running something from the command-line, you can
just call a function with some values.
Anyway, this is probably off-topic now. But I think it's important
to know.
A: The simple answer is to not test compile modules with perl -c... use perl -e'use Module'
or perl -e0 -MModule instead.
perl -c is designed for doing a test compile of a script, not a module. When you run it
on one of your
When recursively using modules, the key point is to make sure anything externally referenced is set up early. Usually this means at least making sure @ISA is set in a compile-time construct (in BEGIN{} or via "use parent" or the deprecated "use base") and that @EXPORT and friends are set in BEGIN{}.
The basic problem is that if module Foo uses module Bar (which uses Foo), compilation of Foo stops right at that point until Bar is fully compiled and its mainline code has executed. The answer is to make sure that whatever parts of Foo are needed by Bar's compilation and mainline code already exist by then.
(In many cases, you can sensibly separate out the functionality into more modules and break the recursion. This is best of all.)
A: It's not really good practice to have circular dependencies. I'd advise factoring something or another to a third module so you can have A depends on B, A depends on C, B depends on C. | unknown | |
d13108 | train | In your WPF application you should have an App.xaml file; in there you can add Styles that are to be used throughout your UI.
Example:
<Application x:Class="WpfApplication8.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
StartupUri="MainWindow.xaml">
<Application.Resources>
<!--The style for all your buttons, setting the background property to your custom brush-->
<Style TargetType="{x:Type Button}"> <!--Indicate that this style should be applied to Button type-->
<Setter Property="Background">
<Setter.Value>
<LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
<GradientStop Color="Black"/>
<GradientStop Color="White" Offset="1"/>
</LinearGradientBrush>
</Setter.Value>
</Setter>
</Style>
</Application.Resources>
</Application>
Or if you don't want it to apply to all buttons, you can give your Style a Key so you can apply it to certain Buttons in your UI
<Application x:Class="WpfApplication8.App"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
StartupUri="MainWindow.xaml">
<Application.Resources>
<!--Add a x:Key value so you can use on certain Buttons not all-->
<Style x:Key="MyCustomStyle" TargetType="{x:Type Button}">
<Setter Property="Background">
<Setter.Value>
<LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
<GradientStop Color="Black"/>
<GradientStop Color="White" Offset="1"/>
</LinearGradientBrush>
</Setter.Value>
</Setter>
</Style>
</Application.Resources>
</Application>
To use this Style on a Button, just reference it from the Style property of the Button via StaticResource
<Button Style="{StaticResource MyCustomStyle}" />
this will apply the Style just to this Button
Or if you really want to do it in code behind you can just add the Brush you want to the background
Button b = new Button
{
Background = new LinearGradientBrush(Colors.Black, Colors.White, new Point(0.5, 1), new Point(0.5, 0))
};
It's very easy to translate XAML to code because XAML uses the exact same property names, like the code brush I posted above:
new LinearGradientBrush(Colors.Black, Colors.White, new Point(0.5, 1), new Point(0.5, 0))
is ....
Brush(firstColor, secondColor, StartPoint, EndPoint)
XAML just accesses properties on the Button; they will all have the same names in C#. | unknown | |
d13109 | train | You can have multiple applications in your Angular project; that is how I solved a similar situation.
https://angular.io/cli/generate#application-command
This guide helped me get started.
And here is another guide with some excellent examples.
A: This is not a bug. When you run ng build --prod you run it with AOT compilation on. It means it compiles the app before the build to make sure everything is set correctly. It seems like you are loading different Modules while bootstrapping your app and I'm not sure AOT compilation will agree with that. You can change to using lazy-loaded modules and separate your apps into 2 different modules.
If you really want then try ng build --prod --aot=false or ng build --prod --aot false.
Since it seems like a scaling application, I think the best solution for you will be to use monorepo patterns: you'll have multiple apps with libraries, and they will all sit under the same project. You could leverage a lot of re-usability and maintenance will be easier.
Check Nrwl/Nx for Angular here; they provide great tooling for this. It supports the Angular CLI by using schematics. I think it will help you a lot. Maybe you need to deploy your apps to different places or have different environments for each app, and a monorepo is a perfect fit to achieve that IMHO.
More about monorepos from Wikipedia:
Advantages
There are a number of potential advantages to a monorepo over individual repositories:
* Ease of code reuse – Similar functionality or communication protocols can be abstracted into shared libraries and directly included by projects, without the need of a dependency package manager.
* Simplified dependency management – In a multiple repository environment where multiple projects depend on a third-party dependency, that dependency might be downloaded or built multiple times. In a monorepo the build can be easily optimized, as referenced dependencies all exist in the same codebase.
* Atomic commits – When projects that work together are contained in separate repositories, releases need to sync which versions of one project work with the other. And in large enough projects, managing compatible versions between dependencies can become dependency hell.[5] In a monorepo this problem can be negated, since developers may change multiple projects atomically.
* Large-scale code refactoring – Since developers have access to the entire project, refactors can ensure that every piece of the project continues to function after a refactor.
* Collaboration across teams – In a monorepo that uses source dependencies (dependencies that are compiled from source), teams can improve projects being worked on by other teams. This leads to flexible code ownership.
Limitations and disadvantages
* Loss of version information – Although not required, some monorepo builds use one version number across all projects in the repository. This leads to a loss of per-project semantic versioning.
* Lack of per-project security – With split repositories, access to a repository can be granted based upon need. A monorepo allows read access to all software in the project, possibly presenting new security issues.
Hope it'll help you | unknown | |
d13110 | train | From what I can remember, I think the only option you'll have is to hide the cursor:
Cursor.Hide()
I had to do something similar to this in a touchscreen app in the past
A: If you prefer the cursor to be completely unusable, then Cursor.Hide() won't fulfill your requirements, because the cursor is only hidden but still clickable. You'll need to add something like this to disable clicking as well.
A: vb.net: Me.Cursor = Cursors.No
c#.net: this.Cursor = Cursors.No | unknown | |
d13111 | train | How about this?
Sub Web_Table_Option_Two()
Dim HTMLDoc As New HTMLDocument
Dim objTable As Object
Dim lRow As Long
Dim lngTable As Long
Dim lngRow As Long
Dim lngCol As Long
Dim ActRw As Long
Dim objIE As InternetExplorer
Set objIE = New InternetExplorer
objIE.Navigate "https://www.asx.com.au/asx/share-price-research/company/BFG/details"
Do Until objIE.ReadyState = 4 And Not objIE.Busy
DoEvents
Loop
Application.Wait (Now + TimeValue("0:00:03")) 'wait for java script to load
HTMLDoc.body.innerHTML = objIE.Document.body.innerHTML
With HTMLDoc.body
Set objTable = .getElementsByTagName("table")
For lngTable = 0 To objTable.Length - 1
For lngRow = 0 To objTable(lngTable).Rows.Length - 1
For lngCol = 0 To objTable(lngTable).Rows(lngRow).Cells.Length - 1
ThisWorkbook.Sheets("Sheet1").Cells(ActRw + lngRow + 1, lngCol + 1) = objTable(lngTable).Rows(lngRow).Cells(lngCol).innerText
Next lngCol
Next lngRow
ActRw = ActRw + objTable(lngTable).Rows.Length + 1
Next lngTable
End With
objIE.Quit
End Sub
If you want to specify the row, you can do that, and just grab the row number that you want/need.
Finally, if you want to loop through an array of stock tickers, use the code below.
Sub Web_Table_Option_Two()
Dim HTMLDoc As New HTMLDocument
Dim objTable As Object
Dim lRow As Long
Dim lngTable As Long
Dim lngRow As Long
Dim lngCol As Long
Dim ActRw As Long
Dim objIE As InternetExplorer
Set objIE = New InternetExplorer
Dim c As Range
Dim sht As Worksheet
Dim LastRow As Long
Dim wb As Workbook: Set wb = ThisWorkbook
Set sht = wb.Sheets("Stocks")
'find last used row in ColumnA
LastRow = sht.Cells(sht.Rows.Count, "A").End(xlUp).Row
For Each c In Range("A2:A" & LastRow)
mystock = c.Value
objIE.Navigate "https://www.asx.com.au/asx/share-price-research/company/" & mystock & "/details"
Do Until objIE.ReadyState = 4 And Not objIE.Busy
DoEvents
Loop
Sheets.Add After:=ActiveSheet
ActiveSheet.Name = mystock
ActRw = 1
Application.Wait (Now + TimeValue("0:00:01")) 'wait for java script to load
HTMLDoc.body.innerHTML = objIE.Document.body.innerHTML
With HTMLDoc.body
Set objTable = .getElementsByTagName("table")
For lngTable = 0 To objTable.Length - 1
For lngRow = 0 To objTable(lngTable).Rows.Length - 1
For lngCol = 0 To objTable(lngTable).Rows(lngRow).Cells.Length - 1
ThisWorkbook.ActiveSheet.Cells(ActRw + lngRow + 1, lngCol + 1) = objTable(lngTable).Rows(lngRow).Cells(lngCol).innerText
Next lngCol
Next lngRow
ActRw = ActRw + objTable(lngTable).Rows.Length + 1
Next lngTable
End With
Next c
objIE.Quit
End Sub
Before:
After: | unknown | |
d13112 | train | There is quite some code that goes into using those Android tutorials that is not mentioned there. I suggest using the Import Sample option from the Android Studio menu. This one is possibly what you need to play with:
https://github.com/googlesamples/android-BasicGestureDetect/
I have embedded that Android Tutorial code below in BasicGestureDetectFragment.java if you use the GitHub code above. In the "moveLog" method you can do whatever you like with the dx values. I have also put code under this that shows you how to create an app for dragging a picture. Just add "ic_launcher.png" to the drawable folder under the res folder so that you can have an image and move it. I suggest coding with Corona SDK, which is a lot simpler than Android.
/*
* Copyright (C) 2013 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.example.android.basicgesturedetect;
import android.content.Intent;
import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.support.v4.view.MotionEventCompat;
import android.view.GestureDetector;
import android.view.MenuItem;
import android.view.MotionEvent;
import android.view.View;
import com.example.android.common.logger.Log;
import com.example.android.common.logger.LogFragment;
import static android.view.MotionEvent.INVALID_POINTER_ID;
public class BasicGestureDetectFragment extends Fragment{
private int mActivePointerId = INVALID_POINTER_ID;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setHasOptionsMenu(true);
}
@Override
public void onActivityCreated(Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
View gestureView = getActivity().findViewById(R.id.sample_output);
gestureView.setFocusable(true);
// BEGIN_INCLUDE(init_detector)
// First create the GestureListener that will include all our callbacks.
// Then create the GestureDetector, which takes that listener as an argument.
GestureDetector.SimpleOnGestureListener gestureListener = new GestureListener();
final GestureDetector gd = new GestureDetector(getActivity(), gestureListener);
/* For the view where gestures will occur, create an onTouchListener that sends
* all motion events to the gesture detector. When the gesture detector
* actually detects an event, it will use the callbacks you created in the
* SimpleOnGestureListener to alert your application.
*/
gestureView.setOnTouchListener(new View.OnTouchListener() {
float mLastTouchX;
float mLastTouchY;
float mPosX;
float mPosY;
@Override
public boolean onTouch(View view, MotionEvent motionEvent) {
gd.onTouchEvent(motionEvent);
//**********
final int action = MotionEventCompat.getActionMasked(motionEvent);
switch (action) {
case MotionEvent.ACTION_DOWN: {
final int pointerIndex = MotionEventCompat.getActionIndex(motionEvent);
final float x = MotionEventCompat.getX(motionEvent, pointerIndex);
final float y = MotionEventCompat.getY(motionEvent, pointerIndex);
// Remember where we started (for dragging)
mLastTouchX = x;
mLastTouchY = y;
// Save the ID of this pointer (for dragging)
mActivePointerId = MotionEventCompat.getPointerId(motionEvent, 0);
break;
}
case MotionEvent.ACTION_MOVE: {
// Find the index of the active pointer and fetch its position
final int pointerIndex =
MotionEventCompat.findPointerIndex(motionEvent, mActivePointerId);
final float x = MotionEventCompat.getX(motionEvent, pointerIndex);
final float y = MotionEventCompat.getY(motionEvent, pointerIndex);
// Calculate the distance moved
final float dx = x - mLastTouchX;
final float dy = y - mLastTouchY;
mPosX += dx;
mPosY += dy;
moveLog(dx);
//invalidate();
// Remember this touch position for the next move event
mLastTouchX = x;
mLastTouchY = y;
break;
}
case MotionEvent.ACTION_UP: {
mActivePointerId = INVALID_POINTER_ID;
break;
}
case MotionEvent.ACTION_CANCEL: {
mActivePointerId = INVALID_POINTER_ID;
break;
}
case MotionEvent.ACTION_POINTER_UP: {
final int pointerIndex = MotionEventCompat.getActionIndex(motionEvent);
final int pointerId = MotionEventCompat.getPointerId(motionEvent, pointerIndex);
if (pointerId == mActivePointerId) {
// This was our active pointer going up. Choose a new
// active pointer and adjust accordingly.
final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
mLastTouchX = MotionEventCompat.getX(motionEvent, newPointerIndex);
mLastTouchY = MotionEventCompat.getY(motionEvent, newPointerIndex);
mActivePointerId = MotionEventCompat.getPointerId(motionEvent, newPointerIndex);
}
break;
}
}
//**********
return false;
}
});
// END_INCLUDE(init_detector)
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
if (item.getItemId() == R.id.sample_action) {
clearLog();
}
return true;
}
public void clearLog() {
LogFragment logFragment = ((LogFragment) getActivity().getSupportFragmentManager()
.findFragmentById(R.id.log_fragment));
logFragment.getLogView().setText("");
}
public void moveLog(float x) {
//do whatever you like
}
}
Image Drag App
Just start a blank app under Android Studio:
1 - Change activity_main.xml to this:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/main"
android:layout_width="match_parent"
android:layout_height="match_parent" >
<ImageView
android:id="@+id/image"
android:layout_width="150dp"
android:layout_height="150dp"
android:layout_alignParentLeft="true"
android:layout_alignParentTop="true"
android:contentDescription="@string/app_name"
android:src="@drawable/ic_launcher" />
</RelativeLayout>
and MainActivity.java to this.
(Leave the package name as your own: package com.jorc.move.myapplication;)
package com.jorc.move.myapplication;
import android.annotation.SuppressLint;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.MotionEvent;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ImageView;
import android.widget.RelativeLayout;
import android.widget.Toast;
public class MainActivity extends AppCompatActivity {
private ViewGroup mainLayout;
private ImageView image;
private int xDelta;
private int yDelta;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
mainLayout = (RelativeLayout) findViewById(R.id.main);
image = (ImageView) findViewById(R.id.image);
image.setOnTouchListener(onTouchListener());
}
private View.OnTouchListener onTouchListener() {
return new View.OnTouchListener() {
@SuppressLint("ClickableViewAccessibility")
@Override
public boolean onTouch(View view, MotionEvent event) {
final int x = (int) event.getRawX();
final int y = (int) event.getRawY();
switch (event.getAction() & MotionEvent.ACTION_MASK) {
case MotionEvent.ACTION_DOWN:
RelativeLayout.LayoutParams lParams = (RelativeLayout.LayoutParams)
view.getLayoutParams();
xDelta = x - lParams.leftMargin;
yDelta = y - lParams.topMargin;
break;
case MotionEvent.ACTION_UP:
Toast.makeText(MainActivity.this,
"thanks for new location!", Toast.LENGTH_SHORT)
.show();
break;
case MotionEvent.ACTION_MOVE:
RelativeLayout.LayoutParams layoutParams = (RelativeLayout.LayoutParams) view
.getLayoutParams();
layoutParams.leftMargin = x - xDelta;
layoutParams.topMargin = y - yDelta;
layoutParams.rightMargin = 0;
layoutParams.bottomMargin = 0;
view.setLayoutParams(layoutParams);
break;
}
mainLayout.invalidate();
return true;
}
};
}
} | unknown | |
d13113 | train | :f0, this, std::placeholders::_1));
}
private:
const int m;
};
And this will print two lines, '101' and '102':
int main() {
A a1(1);
a1.f();
A a2(2);
a2.f();
return 0;
}
Now I realized A::f() will be called very frequently,
so I modified it like this(new version):
class A {
public:
A(int arg)
: m(arg),
handler(std::bind(&A::f0, this, std::placeholders::_1)) {}
void f0(int n) {
std::cout << m + n << std::endl;
}
void f() {
::g(handler);
}
private:
const int m;
const Handler handler;
};
My Questions:
Is it safe to bind this pointer to a member variable?
Is there no functional difference between the two versions?
Can I expect the new version to really gain some performance benefit?
(I suspect my compiler(MSVC) will optimize it by itself,
so I may not need to optimize it by myself).
PS.: This question corrects and replaces the previous one:
Binding member function to a local static variable
A: As Igor Tandetnik mentioned in comments:
Is it safe to bind this pointer to a member variable?
Beware the compiler-generated copy constructor and assignment operator. Consider:
A a(42); A b = a;
Here, b.handler still refers to &a, not &b. This may or may not be what you want.
I also don't think the performance benefit deserves dev-time effort to maintain member variables. (*) | unknown | |
d13114 | train | [[ is not available in scripts which start with #!/bin/sh, or which are started with sh yourscript. Start your script with #!/bin/bash if you want to use it.
See also http://mywiki.wooledge.org/BashGuide/Practices#Choose_Your_Shell
If you are going to use bash, by the way, there's a better syntax for numeric comparisons:
if (( input >= 1 && input <= 10 )); then ...
Note that lower-case variable names are preferred for local use -- all-upper-case names are reserved for environment variables and shell builtins.
If you're not going to use bash, use the POSIX test operator:
if [ "$input" -ge 1 ] && [ "$input" -le 10 ]; then ...
Note that when using [ ] correct quoting is essential, whereas with [[ ]] it is often superfluous; also, [ ] is missing some extensions such as pattern-matching and regular-expression operators.
A: It's complicated:
First, there are three separate ways of constructing your if statement. Each way has its own unique syntax on how to join two booleans. (Actually, there are four ways since one way allows you to use list operators).
A little background...
The if command is a compound command built into the shell. The if command executes the commands following the if. If that command returns a zero value, the if statement is considered true and the then clause executes. Otherwise, if it exists, the else clause will execute. Remember, the if is just a command. You can do things like this:
if ! mv "$foo" "$bar"
then
echo "I can't move $foo to $bar"
exit 2
fi
What we need is a command to do some testing for us. If the test succeeds, that test command returns an exit code of zero. If not, it returns a non-zero exit code. Then, it could be used with the if command!
The test command (Yes, there's really one!).
The [ is an alias for the test command which was created to allow you to test files, strings, and numbers for the if statement. (This is now a built in command in Bash, but its roots are actually part of /bin/test and /bin/[). These are the same:
if test "$foo" -eq "$bar"
then
...
fi
and
if [ "$foo" -eq "$bar" ]
then
...
fi
The test command (if you read the manpage) has a -a And test and a -o Or test. You could have done:
if [ "$INPUT" -ge 1 -a "$INPUT" -le 10 ]
then
....
fi
This is a single test statement with three test parameters (-ge, -a, and -le).
Using List Operators
This isn't the only way to do a compound boolean test. The Bash shell has two list operators: && and ||. The list operators go in between two commands. If you use && and the left hand command returns a non-zero exit code, the right hand command is not executed, and the entire list returns the exit value of the left-hand command. If you use ||, and the left hand command succeeds, the right hand command is not executed, and the entire list returns a zero exit value. If the first command returns a non-zero exit value, the right-hand command is executed, and the entire list returns the exit value of the right-hand command.
That's why you can do things like this:
[ $bar -eq 0 ] || echo "Bar doesn't have a zero value"!
Since [ ... ] is just a command that returns a zero or non-zero value, we can use these list operators as part of our test:
if [ "$INPUT" -ge 1 ] && [ "$INPUT" -le 10 ]
then
...
fi
Note that this is two separate tests and are separated by a && list operator.
Bash's Special [[ compound command
In Kornshell, Zsh, and Bash, there are special compound commands for testing. These are the double square brackets. They appear to be just like the single square brackets command, but because they're compound commands, parsing is affected.
For example:
foo="This has white space"
bar="" #No value
if [ ! $foo = $bar ] # Doesn't work!
then
The shell expands $foo and $bar and the test will become:
if [ This has white space = ]
which just doesn't work. However,
if [[ $foo != $bar ]]
works fine because of special parsing rules. The double brackets allow you to use parentheses for grouping and && and || as boolean operators. Thus:
if [[ $INPUT -ge 1 && $INPUT -le 10 ]]
then
...
fi
Note that the && appears inside a single set of double square brackets. (Note there's no need for quotation marks)
Mathematical Boolean Expression
Bash has built in mathematical processing including mathematical boolean expressions. If you put something between double parentheses, Bash will evaluate it mathematically:
if (( $INPUT >= 1 && $INPUT <= 10 ))
then
...
fi
In this case, (( $INPUT >= 1 && $INPUT <= 10 )) is evaluated. If $INPUT is between 1 and 10 inclusively, the mathematical expression will evaluate as true (zero exit code), and thus the then clause will be executed.
So, you can:
*
*Use the original test (single square brackets) command and use the -a to string together two boolean statements in a single test.
*Use list operators to string together two separate test commands (single square brackets).
*Use the newer compound test command (double square brackets) that now include && and || as boolean operators, so you have a single compound test.
*Forget about test command and just use mathematical evaluation (double parentheses) to evaluate boolean expressions.
A: Test Constructs Can Vary by Shell
As has been mentioned in other posts, [[ is a Bash shell keyword that isn't present in the Bourne shell. You can see this from a Bash prompt with:
type '[['
[[ is a shell keyword
In a Bourne shell, you will instead get "command not found."
Be More Portable: Use the -a Test Operator
A more portable construct is to use the -a test operator to join conditions (see man test for details). For example:
if [ "$INPUT" -ge 1 -a "$INPUT" -le 10 ]; then
: # do something when both conditions are true
else
: # do something when either condition is false
fi
This will work in every Bourne-compatible shell I've ever used, and on any system that has a /bin/\[ executable. | unknown | |
d13115 | train | You need to have admin rights to do what you're doing.
I've had the same problem on a client machine, but as soon as I tried to do the same thing on a machine with administrator rights (Windows 7), everything worked perfectly. | unknown | |
d13116 | train | Well, without a reproducible example, I couldn't come up with a complete solution, but here is a way to generate the first Wednesday date of each month. In this example, I start at 1 JAN 2013 and go out 36 months, but you can figure out what's appropriate for you. Then, you can check against the first Wednesday vector produced here to see if your dates are members of the first Wednesday of the month group and assign a 1, if so.
# I chose this as an origin
orig <- "2013-01-01"
# generate vector of 1st date of the month for 36 months
d <- seq(as.Date(orig), length=36, by="1 month")
# Use that to make a list of the first 7 dates of each month
d <- lapply(d, function(x) as.Date(seq(1:7),origin=x)-1)
# Look through the list for Wednesdays only,
# and concatenate them into a vector
do.call('c', lapply(d, function(x) x[strftime(x,"%A")=="Wednesday"]))
Output:
[1] "2013-01-02" "2013-02-06" "2013-03-06" "2013-04-03" "2013-05-01" "2013-06-05" "2013-07-03"
[8] "2013-08-07" "2013-09-04" "2013-10-02" "2013-11-06" "2013-12-04" "2014-01-01" "2014-02-05"
[15] "2014-03-05" "2014-04-02" "2014-05-07" "2014-06-04" "2014-07-02" "2014-08-06" "2014-09-03"
[22] "2014-10-01" "2014-11-05" "2014-12-03" "2015-01-07" "2015-02-04" "2015-03-04" "2015-04-01"
[29] "2015-05-06" "2015-06-03" "2015-07-01" "2015-08-05" "2015-09-02" "2015-10-07" "2015-11-04"
[36] "2015-12-02"
Note: I adapted this code from answers found here and here.
A: I created a sample dataset to work with like this (Thanks @Frank!):
orig <- "2013-01-01"
d <- data.frame(date=seq(as.Date(orig), length=1000, by='1 day'))
d$Month <- months(d$date)
d$DayWeek <- weekdays(d$date)
d$DayMonth <- as.numeric(format(d$date, '%d'))
From a data frame like this, you can extract the first Wednesday of specific months using subset, like this:
subset(d, Month %in% c('January', 'February') & DayWeek == 'Wednesday' & DayMonth < 8)
This takes advantage of the fact that the day number (1..31) will always be between 1 and 7, and obviously there will be precisely one such day. You could do similarly for the 2nd, 3rd, or 4th Wednesday, changing the condition accordingly, for example DayMonth > 7 & DayMonth < 15. | unknown | |
d13117 | train | If I understand the behaviour right, all you need to do is add timer.cancel() in the else case and keep a reference to the created timer (e.g. make it a field). | unknown | |
d13118 | train | I tested the code below with the fiddle you linked.
$(document).ready(function(){
$(".row").each(function(){
var rowHeight = $(this).height();
console.log(rowHeight);
$(".column", this).height(rowHeight);
$(".v_align", this).height(rowHeight);
});
});
So for you this should work:
$(window).on("resize", function () {
$(".row_v_align").each(function(){
var rowHeight = $(this).height();
console.log(rowHeight);
$(".column_v_align", this).height(rowHeight);
$(".v_align", this).height(rowHeight);
});
}).resize();
* each() runs the function for every matched element
* $("...", this) makes sure that changes are made inside the current row | unknown | |
d13119 | train | There are two alternatives, either with elsif:
if rst = '1' then
anode_sel <= (others => '0');
elsif cnt = std_logic_vector(ROLL_OVER) then
anode_sel <= anode_sel + 1;
end if;
or: else if
if rst = '1' then
anode_sel <= (others => '0');
else
if cnt = std_logic_vector(ROLL_OVER) then
anode_sel <= anode_sel + 1;
end if;
end if;
A: You are not following standard VHDL. The else if is introducing a new if condition which requires an additional end if to be correct syntax. You could use elsif instead and your code would not generate errors. | unknown | |
d13120 | train | I was able to do this by creating a separate .php file in my theme "twentytwentytwo" folder and then I pasted the following code in it...
"<?php
/*
*
*Template Name: new Theme
*Template Post Type: post
*/
?>"
""
""
By doing this I created a separate template for my post. Then I simply pasted my PHP code under the get_header() line....
After this I logged in to my WP admin, clicked on edit post, and in the template dropdown section I found my template name "new Theme"; I just selected that and published. Job done!
A: Use 'the_content' filter https://developer.wordpress.org/reference/hooks/the_content/
example :
function custom_content($content)
{
global $post;
if ($post && $post->post_type == 'post') {
if ($post->ID == '145') { // target a specific post by ID
// do something
}
}
return $content;
}
add_filter('the_content', 'custom_content'); | unknown | |
d13121 | train | One approach would be to extract any shared interfaces into a particular directory, and then use your version control system to ensure that the directory is the same between both projects -- for example, if you were using Subversion, it has a feature called "externals" that allows one project to contain a directory that is actually a link to a specified directory (or a specified version of a specified directory) in another project.
A: If your mock implements an interface from the referenced project, then that project must have been built along with the rest of the test projects. If that really isn't the case, check your build order/build configurations in Visual Studio.
It's still possible that an interface change does not trigger any compilation errors but tests do fail. But that's unrelated to solution setup.
A: I moved all my non-executing test infrastructure code into a single project. It's now in the development and test solution. This way, if development code is automatically refactored, it's changed in my project. If breaking changes are made to an interface, they'll become errors during compile. | unknown | |
d13122 | train | Does the following make a difference?
Sub ValidateWHNO(frm as Access.Form)
Dim EnteredWHNO As Integer
Dim actForm As String
Dim deWHNO As Variant
Dim msg As Integer
EnteredWHNO = frm.ActiveControl.Value
actForm = frm.Name
deWHNO = DLookup("[WHno]", "tblDataEntry", "[WHno] = " & EnteredWHNO)
If EnteredWHNO = deWHNO Then
msg = MsgBox("You have already entered " & EnteredWHNO & " as a WHNO. The next number is " & DMax("[WHno]", "tblDataEntry") + 1 & ", use this?", 4 + 64, "Already Used WHno!")
If msg = 6 Then
frm.ActiveControl.Value = DMax("[WHno]", "tblDataEntry") + 1
Else
frm.ActiveControl.Value = Null
frm![WHno].SetFocus
End If
End If
End Sub
And your call from each form would be:
VaidateWHNO Me
Instead of using a relative reference to the form (Screen.ActiveForm), the code passes the form reference through directly and uses that reference as the parent of the .setFocus method. | unknown | |
d13123 | train | The behaviour which you describe is called "short-circuit evaluation".
There are already many entries regarding this topic on stackOverflow, e.g. here.
But in short:
You do not know!
See PostgreSQL documentation here.
It says:
The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order. Furthermore, if the result of an expression can be determined by evaluating only some parts of it, then other subexpressions might not be evaluated at all.
If you continue reading the documentation then you see that it stays consistently vague like this.
So summary: In general SQL is declarative, meaning that you can tell the database-framework what you want, but not how you want it to be computed.
The system can make freely choices in that regard. | unknown | |
d13124 | train | Solution:
Using generics it works fine.
useArray.ts:
`import { useState } from "react";
export default function useArray<T>(defaultValue: T[]): {
array: T[],
set: React.Dispatch<SetStateAction<T[]>>,
push: (elemet: T) => void,
remove: (index: number) => void,
filter: (callback: (n: T) => boolean) => void,
update: (index: number, newElement: T) => void,
clear: () => void
} {
const [array, setArray] = useState(defaultValue);
function push(element: T) {
setArray((a) => [...a, element]);
}
function filter(callback: (n: T) => boolean) {
setArray((a) => a.filter(callback));
}
function update(index: number, newElement: T) {
setArray((a) => [
...a.slice(0, index),
newElement,
...a.slice(index + 1, a.length),
]);
}
function remove(index: number) {
setArray((a) => [...a.slice(0, index), ...a.slice(index + 1, a.length)]);
}
function clear() {
setArray([]);
}
return { array, set: setArray, push, filter, update, remove, clear };
} | unknown | |
d13125 | train | The arrow is just off the page to the right (note the horizontal scroll bar). It's absolutely positioned relative to the document and left:100% places it just past the right edge of the page. It seems that you want the arrow to be absolutely positioned relative to the button element itself.
Absolute positioning places an element "at a specified position relative to its closest positioned ancestor or to the containing block." (See absolute position @ MDN.) So the button itself needs to be "positioned".
In my example, below, I've added position:relative to the button element so that it becomes a "positioned ancestor".
@import url('https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css');
.btn-warning {
position: relative;
}
.btn-warning:after {
content: '';
position: absolute;
left: 100%;
top: 50%;
margin-top: -13px;
border-left: 0;
border-bottom: 13px solid transparent;
border-top: 13px solid transparent;
border-left: 10px solid #5A55A3;
}
<div class="container">
<button class="btn btn-warning">Button With Arraow out<br />This is Test</button>
</div> | unknown | |
d13126 | train | Here is an AsyncTask example - http://www.vogella.com/articles/AndroidPerformance/article.html
Here is a Progress dialogue example - http://www.vogella.com/articles/AndroidDialogs/article.html
should get you started in the right direction.
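For what it's worth, here is a minimal, untested sketch of that pattern (MyActivity, parse() and showResult() are placeholder names, not from the linked tutorials):
// Sketch only: an AsyncTask that shows a ProgressDialog while the parsing runs in the background.
private class ParseTask extends AsyncTask<String, Void, String> {
    private ProgressDialog dialog;
    @Override
    protected void onPreExecute() {
        dialog = ProgressDialog.show(MyActivity.this, "", "Parsing..."); // spinner shown on the UI thread
    }
    @Override
    protected String doInBackground(String... params) {
        return parse(params[0]); // slow parsing work off the UI thread; parse() is hypothetical
    }
    @Override
    protected void onPostExecute(String result) {
        dialog.dismiss();   // back on the UI thread
        showResult(result); // showResult() is hypothetical
    }
}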
A: Use an AsyncTask to display a progress dialog while it is performing the parsing: write all the parsing code in the doInBackground method and show the result in the onPostExecute method. | unknown | |
d13127 | train | These are very basic techniques:
// create DBquery using JOIN statement
$query = "
SELECT
p.name AS product, c.name AS category
FROM products p
JOIN categories c ON c.id = p.category_id;";
// get DB data using PDO
$stmt = $pdo->prepare($query);
$stmt->execute();
// show table header
printf('| product name | category |' . PHP_EOL);
// loop for output result as table rows
while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
printf('| %-12s | %10s |' . PHP_EOL, $row['product'], $row['category']);
}
Try online | unknown | |
d13128 | train | I believe what you really meant is: what is the difference between onClick={() => callback()} and onClick={callback} (notice: without ()). If you did onClick={callback()} then the callback would be invoked immediately and not when the click occurs.
*
*onClick={() => callback()}: An anonymous function is created each time the component is rendered
*onClick={callback}: A "reference" to the callback function is passed each render cycle instead
There isn't much of a difference between the two other than if using the direct reference version it will also be passed the onClick event, but if the callback doesn't take any arguments anyway this isn't an issue. With the anonymous function there may be higher memory usage (since each component receives a copy of the callback) and some marginal performance hit in constructing the copies.
Using an anonymous callback function allows for other arguments to be passed
onClick={() => callback(customArg)}
Or you can proxy the event object and pass other args
onClick={event => callback(event, customArg)}
You can achieve a similar effect with a direct reference by creating a curried callback function. This allows you to still pass additional arguments and the onClick event
const callback = customArg => event => {...}
...
onClick={callback(customArg)} | unknown | |
d13129 | train | Try creating a new thread for the nextActivity to run in. After calling the thread.start() method, call thread.join() where you want the TestRunner blocked. | unknown | |
d13130 | train | try:
json_var = {u'CRIM': 0.62739, u'ZN': 0, u'B': 395.62, u'LSTAT': 8.47, u'AGE': 56.5, u'TAX': 307, u'RAD': 4, u'CHAS': 0, u'NOX': 0.538, u'MEDV': 19.9, u'RM': 5.834, u'INDUS': 8.14, u'PTRATIO': 21, u'DIS': 4.4986}
value_array = json_var.values()
A: What you are looking for is
d = {'a':1, 'b': 2, 'c': 3}
num_list = d.values()
# num_list = [1,2,3] | unknown | |
d13131 | train | I am suspicious of these lines in your code:
MultipartEntityBuilder builder = MultipartEntityBuilder.create();
builder.setMode(HttpMultipartMode.BROWSER_COMPATIBLE);
builder.setContentType(ContentType.TEXT_XML);
Multipart entities cannot be of content-type xml. They must be one of these types:
* multipart/mixed
* multipart/alternative
* multipart/digest
* multipart/parallel
(See RFC 1341 7.2)
I guess you should use one of these content-types for the multipart entity, and set text/xml as the content type of the single part:
FileBody fileBody = new FileBody(file, ContentType.TEXT_XML);
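For context, a minimal sketch of how the whole request could then be built, assuming Apache HttpClient 4.3+ (url and file are placeholders):
// Sketch: the multipart entity keeps a multipart/* content type; only the single part is text/xml.
HttpEntity entity = MultipartEntityBuilder.create()
    .setMode(HttpMultipartMode.BROWSER_COMPATIBLE) // no setContentType(ContentType.TEXT_XML) on the builder
    .addPart("file", new FileBody(file, ContentType.TEXT_XML)) // the part itself carries text/xml
    .build();
HttpPost post = new HttpPost(url);
post.setEntity(entity);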
(Another issue is that I don't see it as necessary to send a multipart for just one file: you could leave out the MultipartEntityBuilder object and build a FileEntity directly.) | unknown | |
d13132 | train | How can I convert it to a list and save it in a database in JSON
format?
The data should be stored in the database in the format below:
{
"list": [
{
"position": "Manager"
"name": "Bob"
}
]
}
Since the serialized JSON is required to have a property "list" you can't simply serialize the List as is.
Instead, you can define a POJO wrapping this list.
That's how it might look like (I've made it generic in order to make it suitable for serializing any type of list):
@Data
@AllArgsConstructor
public static class ListWrapper<T> {
private List<T> list;
}
And that's how it can be serialized using Jackson's ObjectMapper and ObjectWriter:
List<Emp> emps = List.of(new Emp("Manager", "Bob"));
ListWrapper<Emp> listWrapper = new ListWrapper<>(emps);
ObjectMapper mapper = new ObjectMapper();
ObjectWriter writer = mapper.writerWithDefaultPrettyPrinter();
String json = writer.writeValueAsString(listWrapper);
System.out.println(json);
Output:
{
"list" : [ {
"name" : "Manager",
"position" : "Bob"
} ]
}
A: If you need to store the object list as JSON in the DB, then you need to convert the list of objects to a JSON string using Jackson's ObjectMapper and store the converted string in the DB. | unknown | |
d13133 | train | I wrote an example for you.
$('li').click(function(e) {
var path = [];
var el = $(this);
do {
path.unshift(el.clone().children().remove().end().text().trim());
el = el.parent().closest('li');
} while(el.length != 0);
console.log(path.join('/'));
e.stopPropagation();
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<ul>
<li>item1</li>
<li>item2
<ul>
<li>superitem1
<ul>
<li>eliteitem1</li>
<li>eliteitem2</li>
</ul>
</li>
<li>superitem3</li>
<li>superitem4</li>
</ul>
</li>
<li>item3</li>
<li>item4</li>
</ul> | unknown | |
d13134 | train | Apply the function on row[0] or row['URL']
Also, you have to apply it to my_data.iterrows() and not to my_data.
from pyshorteners import Shortener
import pandas as pd
def generate_short(url):
x = shortener.short(url)
return x
my_data = pd.read_csv( 'Link-Tests.csv', sep = "\t") #separator argument is optional. It can be a semicolon or a tab. Check your CSV file to know what the separator is.
for index,row in my_data.iterrows():
x = shortener.short(row[0])
print(x)
You can always store the shortened URLs in a separate list, convert it into a DataFrame, and then merge it with the original DataFrame based on the index.
lst = []
my_data = pd.read_csv( 'Link-Tests.csv', sep = "\t")
for index,row in my_data.iterrows():
x = shortener.short(row[0])
lst.append(x)
df = pd.DataFrame(lst, columns=["Short-Url"])
my_data = my_data.join(df, how= 'outer')
A: First try doing this:
from pyshorteners import Shortener
import csv
def generate_short(url):
x = shortener.short(url)
return x
with open('Links_Test.csv') as csvfile:
my_data = csv.reader(csvfile, dialect = 'excel')
for row in my_data:
print(row) # output: ['URL'], ['google.com']...
You probably want to use next() or maybe look at this thread to ignore the header. Also, you probably want to use row[0] to get the first item in the list. So your final code might be
from pyshorteners import Shortener
import csv
def generate_short(url):
x = shortener.short(url)
return x
with open('Links_Test.csv') as csvfile:
next(csvfile) # skip the header row
my_data = csv.reader(csvfile, dialect = 'excel')
for row in my_data:
print(row[0]) # output: 'google.com' ....
# do the link shortener stuff here | unknown | |
d13135 | train | Call your getListForRv() in the EditText's afterTextChanged method.
A: notifyDataSetChanged will refresh every item view. If you want to synchronize all the EditTexts, I have two solutions:
* Data binding: you can use ObservableField
* Create a listener and make sure every item keeps the listener; when the EditText is changed, call the listener and all item views will receive the change (a rough sketch of this follows below)
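A very rough sketch of that second idea, with invented names (TextSyncListener, listeners and notifyTextChanged are illustrations, not from your code):
// Hypothetical callback that every ViewHolder registers with the adapter.
public interface TextSyncListener {
    void onTextChanged(String newText);
}
// Inside the adapter (fragment of a class, using java.util.List/ArrayList):
private final List<TextSyncListener> listeners = new ArrayList<>();
private void notifyTextChanged(String newText) {
    for (TextSyncListener listener : listeners) {
        listener.onTextChanged(newText); // each bound item view updates its own EditText
    }
}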
A: Finally I managed to solve this by doing the following, don't know if it's a bad practice but works for me:
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
// Inflate the layout for this fragment
View view = inflater.inflate(R.layout.fragment_fonts, container, false);
itemList = new ArrayList<FontItem>();
EditText editText = getActivity().findViewById(R.id.edit_text);
itemArrayAdapter = new ItemArrayAdapter(R.layout.recyclerview_item, itemList);
recyclerView = (RecyclerView) view.findViewById(R.id.font_list);
recyclerView.setLayoutManager(new LinearLayoutManager(getActivity()));
recyclerView.setItemAnimator(new DefaultItemAnimator());
recyclerView.setAdapter(itemArrayAdapter);
getListForRv(10);
editText.addTextChangedListener(new TextWatcher() {
@Override
public void beforeTextChanged(CharSequence s, int start, int count, int after) {
}
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
recyclerView.setLayoutManager(new LinearLayoutManager(getActivity()));
recyclerView.setItemAnimator(new DefaultItemAnimator());
recyclerView.setAdapter(itemArrayAdapter);
getListForRv(90);
itemArrayAdapter.notifyDataSetChanged();
}
@Override
public void afterTextChanged(Editable s) {
}
});
return view;
}
private void getListForRv(int k) {
if (!itemList.isEmpty())
itemList.clear();
for (int i = k; i < 1000; i++) {
itemList.add(new FontItem("Item " + i));
}
} | unknown | |
d13136 | train | I added a working example; let me know if the HTML structure is different.
$('section span').on('click', function(e) {
var OlObj = $(this).parent('h1').next('ol');
OlObj.append( OlObj.find('li').get().reverse());
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<section>
<h1>artists <span>sort</span></h1>
<ol>
<li>Hockney, David</li>
<li>Matisse, Henri</li>
<li>Picasso, Pablo</li>
</ol>
</section> | unknown | |
d13137 | train | Try to execute:
slave> reset master;
slave> source dump.sql;
slave> start slave;
slave> show slave status\G
[...]
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
[...]
A: 10:53:52 Restoring C:\Users\KasiSTD\Desktop\master3-1.sql Running: mysql.exe --defaults-file="c:\users\kasistd\appdata\local\temp\tmpmw0avv.cnf" --protocol=tcp --host=localhost --user=root --port=3306 --default-character-set=utf8 --comments < "C:\Users\KasiSTD\Desktop\master3-1.sql" ERROR 1840 (HY000) at line 26: @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty. Operation failed with exitcode 1 10:53:53 Import of C:\Users\KasiSTD\Desktop\master3-1.sql has finished with 1 errors
This is the error from MySQL server 5.7 when I try to import the backup file.
This is my my.ini file; the only changes are in [mysqld]:
log-bin=mysql-bin
binlog-do-db=master3
auto_increment_increment = 2
gtid-mode = on
enforce-gtid-consistency = 1
server-id=4
A: mysql> show slave status for channel 'master-joro'\G
*************************** 1. row ***************************
Slave_IO_State: Connecting to master
Master_Host: 192.168.1.62
Master_User: joro
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 1066
Relay_Log_File: [email protected]
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
Replicate_Do_DB: id,master2,final_repl
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 1066
Relay_Log_Space: 154
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_UUID:
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more up
dates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set: 2069dbc8-2c85-11e6-b4ce-0027136c5f75:1
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name: master-joro
Master_TLS_Version:
1 row in set (0.00 sec) | unknown | |
d13138 | train | *
*It seems like that (the question mark in a solid black diamond) is what you should be seeing: http://www.fileformat.info/info/unicode/char/fffd/browsertest.htm
*The comment on that character's page says:
used to replace an incoming character whose value is unknown or unrepresentable in Unicode
Maybe the answers to these might help get a better answer:
* What character are you expecting at that place?
* Can you post the URL that you're scraping?
* Is that the character on that page also, or is it getting mangled when picked up by YQL?
Update
You might want to check out the charset option in the where clause of your YQL query - I'm not entirely sure what it does but it looks like it forces the YQL engine to use the specified charset when parsing the page. Perhaps setting it to UTF-8 will solve your problem.
For example,
select * from html where url = 'http://google.com' and charset='utf-8' | unknown | |
d13139 | train | If you want to remove the previously selected option, then do something like this:
var getVal;
$(".sel").change(function() {
// checking previous value is defined or not
if (getVal)
// if defined removing the element
$("#selectBox option[value=" + getVal + "]").remove();
// updating selected option value to variable
getVal = $(this).val();
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<select name="selectBox" class="sel" id="selectBox">
<option value="0">Select</option>
<option value="option1">option1</option>
<option value="option2">option2</option>
<option value="option3">option3</option>
<option value="option4">option4</option>
</select>
Or something more simple using :selected selector
var getVal;
$(".sel").change(function() {
// removing previous selected option
$(getVal).remove();
// updating selected option object to variable
getVal = $(':selected', this);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<select name="selectBox" class="sel" id="selectBox">
<option value="0">Select</option>
<option value="option1">option1</option>
<option value="option2">option2</option>
<option value="option3">option3</option>
<option value="option4">option4</option>
</select>
Update: initially trigger the change event using change() to get the default value if you need it.
$(".sel").change(function() {
//.............
}).change();
//--^--
A: try this way
var theValue;
$('.sel').focus(function () {
theValue = $(this).attr('value');
});
$(".sel").change(function () {
$('.sel option[value=' + theValue + ']').remove();
theValue = $(this).attr('value');
});
DEMO | unknown | |
d13140 | train | According to this Teradata support page, when using OREPLACE the returned string also depends on the second and the third arguments
OREPLACE (SimpledefinitionQuery , 'gpi','gpiREPLC')
The OREPLACE function implicitly converts the source string (first argument) to UNICODE when the second or third argument is a literal (UNICODE), even if the source string is LATIN.
Thus maybe check if the function works when you truncate SimpledefinitionQuery to the first 8000 characters (as suggested in @dnoeth's comment, it returns Unicode VARCHAR(8000))? Or change the literal type of the 2nd and 3rd arguments to Latin as well. | unknown | |
d13141 | train | You have the controller, the call and everything but you need to bind the controller's variable to the view using scope
function pacientesCtrl(NgTableParams, $resource) {
vm = this;
vm.rows = []
..
.then(function(rows) {
vm.rows = rows.data;
}
then in your html:
<tr ng-repeat="paciente in pacientesCtrl.rows">
You should read a book to learn Angular now that you have played with it long enough; it will reinforce some concepts and help you grow as a dev. I did the same as you: got hands-on with Angular and hit too many walls, then I read one book and everything changed.
I also recommend this simple and fun course: https://www.codeschool.com/courses/shaping-up-with-angular-js
A: I believe you need to either use ControllerAs syntax or use $scope.
ControllerAs:
Since you're setting the tableParams on the controller instance, you'd need to use the ControllerAs syntax to assign an alias to the controller and access the property:
ng-controller="pacientesCtrl as ctrl" and also ng-table="ctrl.tableParams"
$scope
If you instead would like to use $scope, then you'd need to inject $scope into your controller and set the tableParams property into the $scope, something like:
var app = angular.module("clinang", ["ngTable", "ngResource"]);
(function() {
app.controller("pacientesCtrl", pacientesCtrl);
pacientesCtrl.$inject = ["NgTableParams", "$resource", "$scope"];
function pacientesCtrl(NgTableParams, $resource, $scope) {
// tip: to debug, open chrome dev tools and uncomment the following line
debugger;
var Api = $resource("/getdadospac/?oper=S");
$scope.tableParams = new NgTableParams({}, {
getData: function(params) {
// ajax request to api
return Api.get(params.url())
.$promise
.then(function(rows) {
params.total(rows.recordsTotal); // recal. page nav controls
return rows.data;
});
}
});
$scope.tableParams.reload();
}
})();
Notice that we're setting the tableParams property on the $scope and not on the controller instance. Your HTML should remain the same.
Personally I prefer the ControllerAs syntax, but both should work | unknown | |
d13142 | train | Sounds like a bug with Siri for watchOS, I would report it in Feedback Assistant with logs attached. | unknown | |
d13143 | train | This is ugly but... try:
sox "|sox in.wav -p trim 0 start" "|sox in.wav -p trim length" out.wav
Where start is the offset of the area to remove and length is the
number of seconds to remove.
Example: to remove between 30s and 50s:
sox "|sox in.wav -p trim 0 30" "|sox in.wav -p trim 20" out.wav
Update -- a better solution:
sox in.wav out.wav trim 0 =start =end
Example: to remove between 30s and 50s:
sox in.wav out.wav trim 0 =30 =50 | unknown | |
d13144 | train | I would say the extensibility of the second question far outweighs any possible (and unlikely) performance issues. You can even change that structure on the fly if needed, or have multiple instances for two environments where the mapping turns out different.
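As a hedged sketch of that kind of structure (the keys and values here are made up), a plain object lookup keeps the mapping data-driven instead of hard-coding the branching:
var statusLabels = {
  draft: "Draft",
  review: "In review",
  done: "Completed"
};
// extensible: add or swap entries at runtime, or load a different map per environment
function labelFor(status) {
  return statusLabels[status] || "Unknown";
}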
It might be slower for examples as small as the one you posted, but it sounds like the real one might have a great many entries, in which case a map lookup is effectively constant-time rather than the O(n) of checking entries one by one. Large JSON objects (string-maps) are not uncommon in JavaScript; in fact you often work with them for the libraries and browser features in common code. | unknown | 
d13145 | train | In your code, the else branch never breaks the outer loop, so it only sums the total after it has exited that 2nd loop, and that 2nd loop has no behaviour for 0. You should try to keep it simple with just one loop.
total = 0
while True:
    a = int(input('Enter a number: '))
    if a == 0:
        print('Thanks for playing.. goodbye')
        break
    else:
        if a > 99 or a < 0: # You don't need a second while, just an if.
            print('INVALID')
        else:
            total = total + a
            print(total)
Indentation is key in Python, so be careful with it.
I also cleaned up your code a bit. For example: since the while already loops, there is no need for 2-3 different input() calls inside and outside the loop; just call it once at the beginning of each iteration.
A: I think this is a more pythonic approach
total = 0
while True:
    a = int(input('Enter a number: '))
    if a == 0:
        break
    if a > 99 or a < 0:
        print('INVALID')
    else:
        total = total + a
        print(total)
print('Thanks for playing.. goodbye')
A: When using your code, the result is :
Enter a number: 100
INVALID
Enter a number: 0
0
Enter a number: 0
Thanks for playing.. goodbye
And I think your code should be something like:
a = int(input('Enter a number: '))
total = 0
keep = True
while keep:
    if a == 0:
        print('Thanks for playing.. goodbye')
        break
    else:
        while a > 99 or a < 0:
            print('INVALID')
            a = int(input('Enter a number: '))
            if a == 0:
                print('Thanks for playing.. goodbye')
                break
        total = total + a
        print(total)
        a = int(input('Enter a number: '))
You may want to provide your requirement in more detail.
A: You are stuck in the while loop of the else branch: since you entered 100 at first, you can't get out of there unless you enter a number that satisfies 0 <= a <= 99. You can add another if statement for a == 0 to exit that while loop, placed just below the a = int(input('Enter a number')) of the else branch.
I think it is good to check where you are in the loop using a single print(a), for example just before the if-else, or just before the while inside the else branch. Then you can see where it goes wrong.
a = int(input('Enter a number: '))
total = 0
keep = True
while keep:
    # print(a) here to make sure whether you are passing here or not.
    if a == 0:
        print('Thanks for playing.. goodbye')
        break
    else:
        # print(a) here to make sure whether you are passing here or not.
        while a > 99 or a < 0:  # You are stuck in this while loop since you entered 100 at first; you can't get out from here unless you enter a: 0 <= a <= 99.
            print('INVALID')
            a = int(input('Enter a number: '))
            if a == 0:
                print('Thanks for playing.. goodbye')
                break
        total = total + a
        print(total)
        a = int(input('Enter a number: ')) | unknown | 
d13146 | train | You're calling counter() twice, subtracting 40 each time, call it just once
var start = 400;
var interval = 40;
function counter() {
return start -= interval;
}
var stop = setInterval(function() {
var count = counter();
if (count > 0) {
document.getElementById("test").innerHTML = count;
} else {
clearInterval(stop);
}
}, 1000); | unknown | |
d13147 | train | The best way to check whether a document can be updated is to validate before it's updated. That way, if all of the assertions you've made before the update pass, then you'll know it has been successfully updated:
router.post('/update',async(req,res)=>{
try {
// move this statement inside the try block; otherwise if "id" or "addresses"
// is missing from req.body, it'll throw an unhandled destructure error
const { id, addresses} = req.body;
// make sure the id is present
if (!id) throw String("Invalid request. You must supply a user id to update!");
/*
I'm not sure how "addresses" is structured, but if you expect
it to be an object, then make assertions against an object with structure:
if(!object || object.length <= 0 || !object.name || !object.city ...etc)
if it's an array, then make assertions that it's not empty...
*/
if (addresses assertions are invalid)
throw String("Invalid request. The addresses you've provided are not valid! Please try again.");
/*
If the above passes the req.body assertions, then the only thing
that can fail here is if "id" is invalid. If it's invalid, then Mongoose
will throw an error if "updateOne" doesn't find the document;
otherwise, you'll know the document has been successfully updated.
Optionally, you can manually handle "id" errors by finding and throwing
an error if the document is empty BEFORE attempting to update,
for example...
const existingUser = await UserModel.findOne({ _id: id });
if (!existingUser) throw String("Unable to locate that user to update.");
await existingUser.update(addresses);
await existingUser.save();
*/
await UserModel.updateOne({ _id: id }, { addresses });
// if the above is successful, then the client should receive this
// message:
res.status(200).send("Successfully updated user!");
} catch (error) {
// if any of the above assertions are unsuccessful, then the
// client should receive an error message:
res.status(400).send(error.toString());
}
});
On a side note, always return a response! Even if it's just res.end(). Failing to do so, will result in a hung request/unhandled promise. | unknown | |
d13148 | train | You have a named capturing group, (?<dot>\.(?!\.))+), in your regex. I don't believe that is supported in vscode search across files.
In any case, your original regex froze vscode on my Windows machine, but when I removed the named capturing group, the regex worked fine in BOTH the find in a file widget and searching across files. So this did not freeze vscode for me:
(?:((\s|\w+|\d+|\]|\))+)(?<!(\bReact\b))(\.(?!\.))+)
I suggest you replace the named capturing group with just a simple capturing group. | unknown | |
d13149 | train | yes it is, you can use AJAX :
var dataIn = { someKey : "someValue" };
$.ajax({
url : "https://domain.tld/folder2/editor.php",
data : dataIn,
success : function ( result ) {
// do something with the response
}
});
or $.get or $.post depending on what you need in your backend. | unknown | |
d13150 | train | It's not exactly clear, either from your desired output or from your code, what exactly you are trying to achieve, but if it's just counting words in individual sentences then the strategy should be:
*
*Read your common.txt into a set for a fast lookup.
*Read your sample.txt and split on . to get individual sentences.
*Clear up all non-word characters (you'll have to define them, or use the regex \b to capture word boundaries) and replace them with whitespace.
*Split on whitespace and count the words not present in the set from Step 1.
So:
import collections
with open("common.txt", "r") as f: # open the `common.txt` for reading
common_words = {l.strip().lower() for l in f} # read each line and and add it to a set
interpunction = ";,'\"" # define word separating characters and create a translation table
trans_table = str.maketrans(interpunction, " " * len(interpunction))
sentences_counter = [] # a list to hold a word count for each sentence
with open("sample.txt", "r") as f: # open the `sample.txt` for reading
# read the whole file to include linebreaks and split on `.` to get individual sentences
sentences = [s for s in f.read().split(".") if s.strip()] # ignore empty sentences
for sentence in sentences: # iterate over each sentence
sentence = sentence.translate(trans_table) # replace the interpunction with spaces
word_counter = collections.defaultdict(int) # a string:int default dict for counting
for word in sentence.split(): # split the sentence and iterate over the words
if word.lower() not in common_words: # count only words not in the common.txt
word_counter[word.lower()] += 1
sentences_counter.append(word_counter) # add the current sentence word count
NOTE: On Python 2.x use string.maketrans() instead of str.maketrans().
This will produce sentences_counter containing a dictionary count for each of the sentences in sample.txt, where the key is an actual word and its associated value is the word count. You can print the result as:
for i, v in enumerate(sentences_counter):
print("Sentence #{}:".format(i+1))
print("\n".join("\t{}: {}".format(w, c) for w, c in v.items()))
Which will produce (for your sample data):
Sentence #1:
area: 1
drainage-basin: 1
great: 1
combined: 1
areas: 1
england: 1
wales: 1
wide: 1
region: 1
fertile: 1
Sentence #2:
mississippi: 1
valley: 1
proper: 1
exceptionally: 1
Keep in mind that (English) language is more complex than this - for example, "A cat wiggles its tail when it's angry so stay away from it." will come out vastly differently depending on how you treat an apostrophe. Also, a dot doesn't necessarily denote the end of a sentence. You should look into NLP if you want to do serious linguistic analysis.
UPDATE: While I don't see the usefulness of re-iterating over each word and repeating the data (the count won't ever change within a sentence), if you want to print each word and nest all the other counts underneath it, you can just add an inner loop while printing:
for i, v in enumerate(sentences_counter):
print("Sentence #{}:".format(i+1))
for word, count in v.items():
print("\t{} {}".format(word, count))
print("\n".join("\t\t{}: {}".format(w, c) for w, c in v.items() if w != word))
Which will give you:
Sentence #1:
area 1
drainage-basin: 1
great: 1
combined: 1
areas: 1
england: 1
wales: 1
wide: 1
region: 1
fertile: 1
drainage-basin 1
area: 1
great: 1
combined: 1
areas: 1
england: 1
wales: 1
wide: 1
region: 1
fertile: 1
great 1
area: 1
drainage-basin: 1
combined: 1
areas: 1
england: 1
wales: 1
wide: 1
region: 1
fertile: 1
combined 1
area: 1
drainage-basin: 1
great: 1
areas: 1
england: 1
wales: 1
wide: 1
region: 1
fertile: 1
areas 1
area: 1
drainage-basin: 1
great: 1
combined: 1
england: 1
wales: 1
wide: 1
region: 1
fertile: 1
england 1
area: 1
drainage-basin: 1
great: 1
combined: 1
areas: 1
wales: 1
wide: 1
region: 1
fertile: 1
wales 1
area: 1
drainage-basin: 1
great: 1
combined: 1
areas: 1
england: 1
wide: 1
region: 1
fertile: 1
wide 1
area: 1
drainage-basin: 1
great: 1
combined: 1
areas: 1
england: 1
wales: 1
region: 1
fertile: 1
region 1
area: 1
drainage-basin: 1
great: 1
combined: 1
areas: 1
england: 1
wales: 1
wide: 1
fertile: 1
fertile 1
area: 1
drainage-basin: 1
great: 1
combined: 1
areas: 1
england: 1
wales: 1
wide: 1
region: 1
Sentence #2:
mississippi 1
valley: 1
proper: 1
exceptionally: 1
valley 1
mississippi: 1
proper: 1
exceptionally: 1
proper 1
mississippi: 1
valley: 1
exceptionally: 1
exceptionally 1
mississippi: 1
valley: 1
proper: 1
Feel free to remove the sentence-number printing and reduce one of the tab indentations to get something closer to the desired output from your question.
UPDATE 2: If you want, you don't have to use a set for the common_words. In this case it's pretty much interchangeable with a list so you can use list comprehension instead of set comprehension (i.e. replace curly with square brackets) but looking through a list is an O(n) operation whereas set lookup is an O(1) operation and therefore a set is preferred here. Not to mention the collateral benefit of automatic de-duplication in case the common.txt has duplicate words.
As for the collections.defaultdict() it's there just to save us some coding/checking by automatically initializing the dictionary to a key whenever it's requested - without it you'd have to do it manually:
with open("common.txt", "r") as f: # open the `common.txt` for reading
common_words = {l.strip().lower() for l in f} # read each line and and add it to a set
interpunction = ";,'\"" # define word separating characters and create a translation table
trans_table = str.maketrans(interpunction, " " * len(interpunction))
sentences_counter = [] # a list to hold a word count for each sentence
with open("sample.txt", "r") as f: # open the `sample.txt` for reading
# read the whole file to include linebreaks and split on `.` to get individual sentences
sentences = [s for s in f.read().split(".") if s.strip()] # ignore empty sentences
for sentence in sentences: # iterate over each sentence
sentence = sentence.translate(trans_table) # replace the interpunction with spaces
word_counter = {} # initialize a word counting dictionary
for word in sentence.split(): # split the sentence and iterate over the words
word = word.lower() # turn the word to lowercase
if word not in common_words: # count only words not in the common.txt
word_counter[word] = word_counter.get(word, 0) + 1 # increase the last count
sentences_counter.append(word_counter) # add the current sentence word count
UPDATE 3: If you just want a raw word list across all sentences as it seems from your last update to the question, you don't even need to consider the sentences themselves - just add a dot to the interpunction list, read the file line by line, split on whitespace and count the words as before:
import collections
with open("common.txt", "r") as f: # open the `common.txt` for reading
common_words = {l.strip().lower() for l in f} # read each line and and add it to a set
interpunction = ";,'\"." # define word separating characters and create a translation table
trans_table = str.maketrans(interpunction, " " * len(interpunction))
sentences_counter = [] # a list to hold a word count for each sentence
word_counter = collections.defaultdict(int) # a string:int default dict for counting
with open("sample.txt", "r") as f: # open the `sample.txt` for reading
for line in f: # read the file line by line
for word in line.translate(trans_table).split(): # remove interpunction and split
if word.lower() not in common_words: # count only words not in the common.txt
word_counter[word.lower()] += 1 # increase the count
print("\n".join("{}: {}".format(w, c) for w, c in word_counter.items())) # print the counts | unknown | |
d13151 | train | A Jquery function allows you to change an attribute of a tag. It is written as $("tag").attr('attr','newvalue').
In your case, you can do $("a").attr('href','posts.php?post=44') inside the click function (you should probably give the a tag an id too)
A: "on-click" handler has $(this) that you can use to navigate in the DOM tree if you know that the object you want to alter is relatively placed (from what I see you have <a/> following <button/>.
$('.button').on('click', function() {
$(this).parent().find('.post-link').attr('href', '#hello-world')
})
JSFiddle
If you want to change a link of the element that is somewhere on the page, I would suggest searching by id value (the id should be unique). Let's say you have <a id="my-id" href="...">...</a>, then:
$('.button').on('click', function() {
$('#my-id').attr('href', '#hello-world')
}) | unknown | |
d13152 | train | @user663049: I think your problem is this line --
$query = "SELECT item_id, username, item_content FROM updates ORDER BY update_time DESC LIMIT $start,$number_of_posts" or die(mysql_error());
It should be --
$query = "SELECT item_id, username, item_content FROM updates ORDER BY update_time DESC LIMIT " . $start . ", " . $number_of_posts;
$result = mysql_query($query) or die(mysql_error());
A: If it displays nothing, it may be because of the web server configuration. Could you try inserting the following at the beginning of the PHP block, so you can at least see errors and do some debugging:
<?php
error_reporting(E_ALL);
ini_set('display_errors', true);
?> | unknown | |
d13153 | train | Your problem has nothing to do with the code you posted. Please provide us with additional information. Also consider setting the shell of the currently active widget as the parent shell in the MessageBox constructor (e.g. new MessageBox(swtControl.getShell(), SWT.OK)). Otherwise the dialog might not be modal. This depends on the modal style of the Shell.
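For illustration, a minimal sketch of that suggestion, assuming swtControl is the currently active widget:
MessageBox box = new MessageBox(swtControl.getShell(), SWT.ICON_INFORMATION | SWT.OK);
box.setText("Info");
box.setMessage("Operation finished.");
box.open(); // blocks until the user dismisses the dialog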
A: After some research I found that you need to dispose of a component you no longer need once the specific action has completed. So once my MessageDialog appears and the user clicks
OK... I need to dispose of my MessageDialog using Display.getCurrent().dispose() | unknown | 
d13154 | train | Why is that surprising? You are mixing arguments and standard input, which are fundamentally completely different.
However, it's not hard to accommodate this requirement.
case $# in
3) text="$3" ;;
2) text=$(cat) ;;
esac
: .... do stuff with "$text"
Your script has slightly sloppy indentation and quoting, so here is a somewhat refactored version.
key="" #your maildrill API key
from_email="" #who is sending the email
from_name="curl sender" #from name
case $# in
3) text="$3";;
2) text="$(cat)";;
*) echo "$0: oops! Need 2 or 3 arguments -- aborting" >&2; exit 1 ;;
esac
msg='{ "async": false, "key": "'"$key"'", "message": { "from_email": "'"$from_email"'", "from_name": "'"$from_name"'", "return_path_domain": null, "subject": "'"$2"'", "text": "'"$text"'", "to": [ { "email": "'"$1"'", "type": "to" } ] } }'
result=$(curl -A 'Mandrill-Curl/1.0' -d "$msg" 'https://mandrillapp.com/api/1.0/messages/send.json' -s 2>&1)
case $result in
    *"sent"*) exit 0;;
    *) echo "$0: error: $result" >&2; exit 2;;
esac
Notice in particular how any user-supplied string absolutely must be within double quotes.
(I took out Reply-To: because it's completely superfluous when it's equal to the From: header.) | unknown | |
d13155 | train | You could use RelativeLayout in the MainPage. Then you add the Button and the CameraPreview into it. For example:
<?xml version="1.0" encoding="UTF-8"?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
xmlns:local="clr-namespace:CustomRenderer;assembly=CustomRenderer"
x:Class="CustomRenderer.MainPage"
Padding="0,20,0,0"
Title="Main Page">
<ContentPage.Content>
<RelativeLayout>
<local:CameraPreview
Camera="Rear" x:Name="Camera"
RelativeLayout.WidthConstraint="{ConstraintExpression Type=RelativeToParent,Property=Width,Factor=1,Constant=0}"
RelativeLayout.HeightConstraint="{ConstraintExpression Type=RelativeToParent,Property=Height,Factor=1,Constant=0}"/>
<Button x:Name="button"
Text="Button" Clicked="button_Clicked"
RelativeLayout.YConstraint="{ConstraintExpression Type=RelativeToView,ElementName=Camera,Property=Height,Factor=.85,Constant=0}"
RelativeLayout.XConstraint="{ConstraintExpression Type=RelativeToView,ElementName=Camera,Property=Width,Factor=.05,Constant=0}"
RelativeLayout.WidthConstraint="{ConstraintExpression Type=RelativeToParent,Property=Width,Factor=.9,Constant=0}"
RelativeLayout.HeightConstraint="{ConstraintExpression Type=RelativeToParent,Property=Height,Factor=.125,Constant=0}" />
</RelativeLayout>
</ContentPage.Content>
</ContentPage>
And you could use the MessagingCenter to fire the click event of the CameraPreview.
For example, in the renderer:
protected override void OnElementChanged(ElementChangedEventArgs<CustomRenderer.CameraPreview> e)
{
base.OnElementChanged(e);
if (Control == null)
{
cameraPreview = new CameraPreview(Context);
SetNativeControl(cameraPreview);
MessagingCenter.Subscribe<MainPage>(this, "ButtonClick", (sender) => {
if (cameraPreview.IsPreviewing)
{
cameraPreview.Preview.StopPreview();
cameraPreview.IsPreviewing = false;
}
else
{
cameraPreview.Preview.StartPreview();
cameraPreview.IsPreviewing = true;
}
});
}
if (e.OldElement != null)
{
// Unsubscribe
cameraPreview.Click -= OnCameraPreviewClicked;
}
if (e.NewElement != null)
{
Control.Preview = Camera.Open((int)e.NewElement.Camera);
// Subscribe
cameraPreview.Click += OnCameraPreviewClicked;
}
}
And in the Button click event:
private void button_Clicked(object sender, System.EventArgs e)
{
MessagingCenter.Send<MainPage>(this, "ButtonClick");
}
A: This worked for me.
In the CameraPreview class just add a new View member (cameraLayoutView) and a button:
public sealed class CameraPreview : ViewGroup, ISurfaceHolderCallback
{
View cameraLayoutView;
global::Android.Widget.Button takePhotoButton;
}
Then in the constructor instantiate this view and button:
public CameraPreview (Context context)
: base (context)
{
surfaceView = new SurfaceView (context);
AddView (surfaceView);
Activity activity = this.Context as Activity;
cameraLayoutView = activity.LayoutInflater.Inflate(Resource.Layout.CameraLayout, this, false);
AddView(cameraLayoutView);
takePhotoButton = this.FindViewById<global::Android.Widget.Button>(Resource.Id.takePhotoButton);
takePhotoButton.Click += TakePhotoButtonTapped;
windowManager = Context.GetSystemService (Context.WindowService).JavaCast<IWindowManager> ();
IsPreviewing = false;
holder = surfaceView.Holder;
holder.AddCallback (this);
}
async void TakePhotoButtonTapped(object sender, EventArgs e)
{
//handle the click here
}
protected override void OnLayout (bool changed, int l, int t, int r, int b)
{
var msw = MeasureSpec.MakeMeasureSpec (r - l, MeasureSpecMode.Exactly);
var msh = MeasureSpec.MakeMeasureSpec (b - t, MeasureSpecMode.Exactly);
surfaceView.Measure (msw, msh);
surfaceView.Layout (0, 0, r - l, b - t);
cameraLayoutView.Measure(msw, msh);
cameraLayoutView.Layout(0, 0, r - l, b - t);
}
Also add this layout in Resources/Layout:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:layout_weight="1">
<TextureView
android:id="@+id/textureView"
android:layout_marginTop="-95dp"
android:layout_width="wrap_content"
android:layout_height="wrap_content" />
<Button
android:id="@+id/toggleFlashButton"
android:layout_width="37dp"
android:layout_height="37dp"
android:layout_gravity="top|left"
android:layout_marginLeft="25dp"
android:layout_marginTop="25dp"
android:background="@drawable/NoFlashButton" />
<Button
android:id="@+id/switchCameraButton"
android:layout_width="35dp"
android:layout_height="26dp"
android:layout_gravity="top|right"
android:layout_marginRight="25dp"
android:layout_marginTop="25dp"
android:background="@drawable/ToggleCameraButton" />
<Button
android:id="@+id/takePhotoButton"
android:layout_width="65dp"
android:layout_height="65dp"
android:layout_marginBottom="15dp"
android:layout_gravity="center|bottom"
android:background="@drawable/TakePhotoButton" />
</FrameLayout>
And the button icons in Resources/drawable. | unknown | |
d13156 | train | A lambda function instance has a maximum lifetime of about 2 hours even if it's in use. If you want to keep an instance alive then you should use provisioned concurrency.
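If it helps, a hedged example of configuring that with the AWS CLI (the function name and alias here are placeholders):
aws lambda put-provisioned-concurrency-config \
    --function-name my-function \
    --qualifier live \
    --provisioned-concurrent-executions 5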
A: What Matt said above is true, you should use provisioned concurrency. But you can also speed up the cold start by using lambda layers to include files there, instead of downloading on initialization (if the files you need to download don't change frequently). | unknown | |
d13157 | train | Reverting does not delete commits, it creates a new commit that is the inverse of the patch set. So if you're cherry picking previous commits into a new branch, the old ones won't be identified as needing to be merged.
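As a quick illustration of the difference (commit ids are placeholders):
git revert <sha>               # adds a new commit that undoes <sha>; the original commit stays in history
git checkout -b rework <base>  # start a branch from the point you want to rebuild on
git cherry-pick <sha>          # replays the original change as a brand-new commit on that branch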
You could attempt to rebase those same commits into a new branch, then merge those back in. The rebasing would replay and create new commits for your reverted items. Since you don't provide a lot of details, I can't give an exact answer. | unknown | |
d13158 | train | We had a similar issue using Boot (creating a multi-servlet app with a parent context) and we solved it in the following way:
1.Create your parent Spring config, which will consist all parent's beans which you want to share. Something like this:
@EnableAutoConfiguration(
exclude = {
//use this section if your want to exclude some autoconfigs (from Boot) for example MongoDB if you already have your own
}
)
@Import(ParentConfig.class)//You can use here many clasess from you parent context
@PropertySource({"classpath:/properties/application.properties"})
@EnableDiscoveryClient
public class BootConfiguration {
}
2.Create type which will determine the type of your specific app module (for example ou case is REST or SOAP). Also here you can specify your required context path or another app specific data (I will show bellow how it will be used):
public final class AppModule {
private AppType type;
private String name;
private String contextPath;
private String rootPath;
private Class<?> configurationClass;
public AppModule() {
}
public AppModule(AppType type, String name, String contextPath, Class<?> configurationClass) {
this.type = type;
this.name = name;
this.contextPath = contextPath;
this.configurationClass = configurationClass;
}
public AppType getType() {
return type;
}
public void setType(AppType type) {
this.type = type;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getRootPath() {
return rootPath;
}
public AppModule withRootPath(String rootPath) {
this.rootPath = rootPath;
return this;
}
public String getContextPath() {
return contextPath;
}
public void setContextPath(String contextPath) {
this.contextPath = contextPath;
}
public Class<?> getConfigurationClass() {
return configurationClass;
}
public void setConfigurationClass(Class<?> configurationClass) {
this.configurationClass = configurationClass;
}
public enum AppType {
REST,
SOAP
}
}
3.Create Boot app initializer for your whole app:
public class BootAppContextInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
private List<AppModule> modules = new ArrayList<>();
BootAppContextInitializer(List<AppModule> modules) {
this.modules = modules;
}
@Override
public void initialize(ConfigurableApplicationContext ctx) {
for (ServletRegistrationBean bean : servletRegs(ctx)) {
ctx.getBeanFactory()
.registerSingleton(bean.getServletName() + "Bean", bean);
}
}
private List<ServletRegistrationBean> servletRegs(ApplicationContext parentContext) {
List<ServletRegistrationBean> beans = new ArrayList<>();
for (AppModule module: modules) {
ServletRegistrationBean regBean;
switch (module.getType()) {
case REST:
regBean = createRestServlet(parentContext, module);
break;
case SOAP:
regBean = createSoapServlet(parentContext, module);
break;
default:
throw new RuntimeException("Not supported AppType");
}
beans.add(regBean);
}
return beans;
}
private ServletRegistrationBean createRestServlet(ApplicationContext parentContext, AppModule module) {
WebApplicationContext ctx = createChildContext(parentContext, module.getName(), module.getConfigurationClass());
//Create and init MessageDispatcherServlet for REST
//Also here you can init app specific data from AppModule, for example,
//you can specify context path in the follwing way
//servletRegistrationBean.addUrlMappings(module.getContextPath() + module.getRootPath());
}
private ServletRegistrationBean createSoapServlet(ApplicationContext parentContext, AppModule module) {
WebApplicationContext ctx = createChildContext(parentContext, module.getName(), module.getConfigurationClass());
//Create and init MessageDispatcherServlet for SOAP
//Also here you can init app specific data from AppModule, for example,
//you can specify context path in the follwing way
//servletRegistrationBean.addUrlMappings(module.getContextPath() + module.getRootPath());
}
private WebApplicationContext createChildContext(ApplicationContext parentContext, String name,
Class<?> configuration) {
AnnotationConfigEmbeddedWebApplicationContext ctx = new AnnotationConfigEmbeddedWebApplicationContext();
ctx.setDisplayName(name + "Context");
ctx.setParent(parentContext);
ctx.register(configuration);
Properties source = new Properties();
source.setProperty("APP_SERVLET_NAME", name);
PropertiesPropertySource ps = new PropertiesPropertySource("MC_ENV_PROPS", source);
ctx.getEnvironment()
.getPropertySources()
.addLast(ps);
return ctx;
}
}
4.Create abstract config classes which will contain child-specific beans and everything that you can not or don't want share via parent context. Here you can specify all required interfaces such as WebSecurityConfigurer or EmbeddedServletContainerCustomizer for your particular app module:
/*Example for REST app*/
@EnableWebMvc
@ComponentScan(basePackages = {
"com.company.package1",
"com.company.web.rest"})
@Import(SomeCommonButChildSpecificConfiguration.class)
public abstract class RestAppConfiguration extends WebMvcConfigurationSupport {
//Some custom logic for your all REST apps
@Autowired
private LogRawRequestInterceptor logRawRequestInterceptor;
@Autowired
private LogInterceptor logInterceptor;
@Autowired
private ErrorRegister errorRegister;
@Autowired
private Sender sender;
@PostConstruct
public void setup() {
errorRegister.setSender(sender);
}
@Override
public void addInterceptors(InterceptorRegistry registry) {
registry.addInterceptor(logRawRequestInterceptor);
registry.addInterceptor(scopeInterceptor);
}
@Override
public void setServletContext(ServletContext servletContext) {
super.setServletContext(servletContext);
}
}
/*Example for SOAP app*/
@EnableWs
@ComponentScan(basePackages = {"com.company.web.soap"})
@Import(SomeCommonButChildSpecificConfiguration.class)
public abstract class SoapAppConfiguration implements ApplicationContextAware {
//Some custom logic for your all SOAP apps
private boolean logGateWay = false;
protected ApplicationContext applicationContext;
@Autowired
private Sender sender;
@Autowired
private ErrorRegister errorRegister;
@Autowired
protected WsActivityIdInterceptor activityIdInterceptor;
@Autowired
protected WsAuthenticationInterceptor authenticationInterceptor;
@PostConstruct
public void setup() {
errorRegister.setSender(sender);
}
@Override
public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
this.applicationContext = applicationContext;
}
/**
* Setup preconditions e.g. interceptor deactivation
*/
protected void setupPrecondition() {
}
public boolean isLogGateWay() {
return logGateWay;
}
public void setLogGateWay(boolean logGateWay) {
this.logGateWay = logGateWay;
}
public abstract Wsdl11Definition defaultWsdl11Definition();
}
5.Create entry point class which will compile whole our app:
public final class Entrypoint {
public static void start(String applicationName, String[] args, AppModule... modules) {
System.setProperty("spring.application.name", applicationName);
build(new SpringApplicationBuilder(), modules).run(args);
}
private static SpringApplicationBuilder build(SpringApplicationBuilder builder, AppModule[] modules) {
return builder
.initializers(
new LoggingContextInitializer(),
new BootAppContextInitializer(Arrays.asList(modules))
)
.sources(BootConfiguration.class)
.web(true)
.bannerMode(Banner.Mode.OFF)
.logStartupInfo(true);
}
}
Now everything is ready to rocket our super multi-app boot in two steps:
1.Init your child apps, for example, REST and SOAP:
//REST module
@ComponentScan(basePackages = {"com.module1.package.*"})
public class Module1Config extends RestAppConfiguration {
//here you can specify all your child's Beans and etc
}
//SOAP module
@ComponentScan(
basePackages = {"com.module2.package.*"})
public class Module2Configuration extends SoapAppConfiguration {
@Override
@Bean(name = "service")
public Wsdl11Definition defaultWsdl11Definition() {
ClassPathResource wsdlRes = new ClassPathResource("wsdl/Your_WSDL.wsdl");
return new SimpleWsdl11Definition(wsdlRes);
}
@Override
protected void setupPrecondition() {
super.setupPrecondition();
setLogGateWay(true);
activityIdInterceptor.setEnabled(true);
}
}
2.Prepare entry point and run as Boot app:
public class App {
public static void main(String[] args) throws Exception {
Entrypoint.start("module1",args,
new AppModule(AppModule.AppType.REST, "module1", "/module1/*", Module1Configuration.class),
new AppModule(AppModule.AppType.SOAP, "module2", "module2", Module2Configuration.class)
);
}
}
enjoy ^_^
Useful links:
*
*https://dzone.com/articles/what-servlet-container
*Spring: Why "root" application context and "servlet" application context are created by different parties?
*Role/Purpose of ContextLoaderListener in Spring?
A: This could be one way of doing this (it's in our production code). We point to XML config, so maybe instead of dispatcherServlet.setContextConfigLocation() you could use dispatcherServlet.setContextClass()
@Configuration
public class JettyConfiguration {
@Autowired
private ApplicationContext applicationContext;
@Bean
public ServletHolder dispatcherServlet() {
AnnotationConfigWebApplicationContext ctx = new AnnotationConfigWebApplicationContext();
ctx.register(MvcConfiguration.class);//CUSTOM MVC @Configuration
DispatcherServlet servlet = new DispatcherServlet(ctx);
ServletHolder holder = new ServletHolder("dispatcher-servlet", servlet);
holder.setInitOrder(1);
return holder;
}
@Bean
public ServletContextHandler servletContext() throws IOException {
ServletContextHandler handler =
new ServletContextHandler(ServletContextHandler.SESSIONS);
AnnotationConfigWebApplicationContext rootWebApplicationContext =
new AnnotationConfigWebApplicationContext();
rootWebApplicationContext.setParent(applicationContext);
rootWebApplicationContext.refresh();
rootWebApplicationContext.getEnvironment().setActiveProfiles(applicationContext.getEnvironment().getActiveProfiles());
handler.setAttribute(
WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE,
rootWebApplicationContext);
handler.setContextPath("/my-root");
handler.setResourceBase(new ClassPathResource("webapp").getURI().toString());
handler.addServlet(AdminServlet.class, "/metrics/*");//DROPWIZARD
handler.addServlet(dispatcherServlet(), "/");
/*Web context 1*/
DispatcherServlet webMvcDispatcherServlet1 = new DispatcherServlet();
webMvcDispatcherServlet1.setContextConfigLocation("classpath*:/META-INF/spring/webmvc-config1.xml");
webMvcDispatcherServlet1.setDetectAllHandlerAdapters(true);
webMvcDispatcherServlet1.setDetectAllHandlerMappings(true);
webMvcDispatcherServlet1.setDetectAllViewResolvers(true);
webMvcDispatcherServlet1.setEnvironment(applicationContext.getEnvironment());
handler.addServlet(new ServletHolder("webMvcDispatcherServlet1",webMvcDispatcherServlet1), "/web1/*");
/*Web context 2*/
DispatcherServlet webMvcDispatcherServlet2 = new DispatcherServlet();
webMvcDispatcherServlet2.setContextConfigLocation("classpath*:/META-INF/spring/web-yp-config.xml");
webMvcDispatcherServlet2.setDetectAllHandlerAdapters(true);
webMvcDispatcherServlet2.setDetectAllHandlerMappings(true);
webMvcDispatcherServlet2.setDetectAllViewResolvers(false);
webMvcDispatcherServlet2.setEnvironment(applicationContext.getEnvironment());
handler.addServlet(new ServletHolder("webMvcDispatcherServlet2",webMvcDispatcherServlet2), "/web2/*");
/* Web Serices context 1 */
MessageDispatcherServlet wsDispatcherServlet1 = new MessageDispatcherServlet();
wsDispatcherServlet1.setContextConfigLocation("classpath*:/META-INF/spring/ws-config1.xml");
wsDispatcherServlet1.setEnvironment(applicationContext.getEnvironment());
handler.addServlet(new ServletHolder("wsDispatcherServlet1", wsDispatcherServlet1), "/ws1/*");
/* Web Serices context 2 */
MessageDispatcherServlet wsDispatcherServlet2 = new MessageDispatcherServlet();
wsDispatcherServlet2.setContextConfigLocation("classpath*:/META-INF/spring/ws-siteconnect-config.xml");
wsDispatcherServlet2.setEnvironment(applicationContext.getEnvironment());
handler.addServlet(new ServletHolder("wsDispatcherServlet2", wsDispatcherServlet2), "/ws2/*");
/*Spring Security filter*/
handler.addFilter(new FilterHolder(
new DelegatingFilterProxy("springSecurityFilterChain")), "/*",
null);
return handler;
}
@Bean
public CharacterEncodingFilter characterEncodingFilter() {
CharacterEncodingFilter bean = new CharacterEncodingFilter();
bean.setEncoding("UTF-8");
bean.setForceEncoding(true);
return bean;
}
@Bean
public HiddenHttpMethodFilter hiddenHttpMethodFilter() {
HiddenHttpMethodFilter filter = new HiddenHttpMethodFilter();
return filter;
}
/**
* Jetty Server bean.
* <p/>
* Instantiate the Jetty server.
*/
@Bean(initMethod = "start", destroyMethod = "stop")
public Server jettyServer() throws IOException {
/* Create the server. */
Server server = new Server();
/* Create a basic connector. */
ServerConnector httpConnector = new ServerConnector(server);
httpConnector.setPort(9083);
server.addConnector(httpConnector);
server.setHandler(servletContext());
return server;
}
}
A: Unfortunately I couldn't find a way to use auto configuration for multiple servlets.
However, you can use the ServletRegistrationBean to register multiple servlets for your application. I would recommend you to use the AnnotationConfigWebApplicationContext to initiate the context because this way you can use the default Spring configuration tools (not the spring boot one) to configure your servlets. With this type of context you just have to register a configuration class.
@Bean
public ServletRegistrationBean servletRegistration() {
AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
context.register(YourConfig.class);
DispatcherServlet servlet = new DispatcherServlet();
servlet.setApplicationContext(context);
ServletRegistrationBean registration = new ServletRegistrationBean(servlet, "/servletX");
registration.setLoadOnStartup(1);
registration.setName("servlet-X");
return registration;
}
If you want to handle multipart requests you should set the multipart configuration for the registration bean. This configuration can be autowired for the registration and will be resolved from the parent context.
public ServletRegistrationBean servletRegistration(MultipartConfigElement mutlipart) ...
registration.setMultipartConfig(mutlipartConfig);
I've created a little github example project which you can reach here.
Note that I set up the servlet configs by Java package but you can define custom annotations for this purpose too.
A: I manage to create an independant jar that makes tracking on my webapp and it is started depending on the value of a property in a spring.factories file in resources/META-INF in the main app:
org.springframework.boot.autoconfigure.EnableAutoConfiguration=my package.tracking.TrackerConfig
Maybe, you could try to have independant war, started with this mechanism and then inject values in the properties files with maven mechanism/plugin (Just a theory, never tried, but based on several projects I worked on) | unknown | |
d13159 | train | There is no answer to that question: your code exhibits undefined behavior. It could print "the right value" as you are seeing, it could print anything else, it could segfault, it could order pizza online with your credit card.
Dereferencing that pointer in main is illegal, it doesn't point to valid memory at that point. Don't do it.
There's a big difference between you two examples: in the first case, *pointer is evaluated before calling printf. So, given that there are no function calls between the line where you get the pointer value, and the printf, chances are high that the stack location pointer points to will not have been overwritten. So the value that was stored there prior to calling printf is likely to be output (that value will be passed on to printf's stack, not the pointer).
In the second case, you're passing a pointer to the stack to printf. The call to printf overwrites (a part of) that same stack region the pointer is pointing to, and printf ends up trying to print its own stack (more or less) which doesn't have a high chance of containing something readable.
Note that you can't rely on getting gibberish either. Your implementation is free to use a different stack for the printf call if it feels like it, as long as it follows the requirements laid out by the standard.
A: This is undefined behavior, and it could have launched a missile instead. But it just happened to give you the correct answer.
Think about it, it kind of make sense -- what else did you expect? Should it have given you zero? If so, then the compiler must insert special instructions at the scope end to erase the variable's content -- waste of resources. The most natural thing for the compiler to do is to leave the contents unchanged -- so you just got the correct output from undefined behavior by chance.
You could say this behavior is implementation defined. For example. Another compiler (or the same compiler in "Release" mode) may decide to allocate myInteger purely in register (not sure if it actually can do this when you take an address of it, but for the sake of argument...), in that case no memory would be allocated for 99 and you would get garbage output.
As a more illustrative (but totally untested) example -- if you insert some malloc and exercise some memory usage before printf you may find the garbage value you were looking for :P
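In that spirit, a minimal, deliberately broken sketch (a variation on the same idea: let another call reuse the old stack frame before reading through the dangling pointer):
#include <stdio.h>
static int *dangling(void) {
    int local = 99;      /* lives in this call's stack frame */
    return &local;       /* using this after return is undefined behavior */
}
static void clobber(void) {
    volatile int filler[16];                      /* likely reuses the same stack area */
    for (int i = 0; i < 16; i++) filler[i] = i;   /* ...and overwrites it */
}
int main(void) {
    int *p = dangling();
    clobber();
    printf("%d\n", *p);  /* may print 99, garbage, or crash - do not rely on it */
    return 0;
}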
Answer to "Edited" part
The "real" answer that you want needs to be answered in disassembly. A good place to start is gcc -S and gcc -O3 -S. I will leave the in-depth analysis for wizards that will come around. But I did a cursory peek using GCC and it turns out that printf("%s\n") gets translated to puts, so the calling convention is different. Since local variables are allocated on the stack, calling a function could "destroy" previously allocated local variables.
A: *
*Destroying is the wrong word imho. Locals reside on the stack, if the function returns the stack space may be reused again. Until then it is not overwritten and still accessible by pointers which you might not really want (because this might never point to something valid)
*Pointers are used to address space in memory, for local pointers the same as I described in 1 is valid. However the pointer seems to be passed to the main program.
*If it really is the address storing the former integer it will result in "99" up until that point in the execution of your program when the program overwrite this memory location. It may also be another 99 by coincidence. Any way: do not do this.
These kind of errors will lead to trouble some day, may be on other machines, other OS, other compiler or compiler options - imagine you upgrade your compiler which may change the behaviour the memory usage or even a build with optimization flags, e.g. release builds vs default debug builds, you name it.
A: In most C/C++ programs their local variables live on the stack, and destroyed means overwritten with something else. In this case that particular location had not been overwritten yet when it was passed as a parameter to printf().
Of course, having such code is asking for trouble because per the C and C++ standards it exhibits undefined behavior.
A: That is undefined behavior. That means that anything can happen, even what you would expect.
The tricky part of UB is when it gives you the result you expect, and so you think that you are doing it right. Then, any change in an unrelated part of the program changes that...
Answering your question more specifically, you are returning a pointer to an automatic variable, that no longer exists when the function returns, but since you call no other functions in the middle, it happens to keep the old value.
If you call, for example printf twice, the second time it will most likely print a different value.
A: The key idea is that a variable represents a name and type for value stored somewhere in memory. When it is "destroyed", it means that a) that value can no longer be accessed using that name, and b) the memory location is free to be overwritten.
The behavior is undefined because the implementation of the compiler is free to choose what time after "destruction" the location is actually overwritten. | unknown | |
d13160 | train | As long as you are using the same DbContext instance, the updated data will not be seen in your entities.
You can check this blog post for details:
It turns out that Entity Framework uses the Identity Map pattern. This means that once an entity with a given key is loaded in the context’s cache, it is never loaded again for as long as that context exists. So when we hit the database a second time to get the customers, it retrieved the updated 851 record from the database, but because customer 851 was already loaded in the context, it ignored the newer record from the database
http://codethug.com/2016/02/19/Entity-Framework-Cache-Busting/
I could successfully reproduce this behavior.
For your case you could as an option create a new instance of context on each iteration
// Main loop
while (!stoppingToken.IsCancellationRequested)
{
using (var scope = _scopeFactory.CreateScope())
{
db = scope.ServiceProvider.GetRequiredService<MyDbContext>();
// Get list of workers
foreach (var worker in db.Workers
.Where(w => w.Status != WorkerStatus.Updating)
.ToList())
{
var prevStatus = worker.Status;
// Poll for new status
string currentStatus = await worker.PollAsync(stoppingToken);
if (currentStatus != null)
{
worker.Status = currentStatus;
}
if (worker.Status != prevStatus)
{
await Log(worker, $"Status changed to {worker.Status}", stoppingToken);
}
await db.SaveChangesAsync(stoppingToken);
}
}
} | unknown | |
d13161 | train | As Theo notes, consider using a dedicated INI file parser, such as the one provided by the third-party PsIni module - see this answer for how to install and use it.
*
*Update: Your own answer now shows how to use it to solve your specific problem.
If installing a module isn't an option, I suggest using a -switch statement, which has awk-like capabilities:
# Array of sample lines, as would be returned by:
# Get-Content $inifile
$iniFileLines = @'
[Datoformat]
Separator=.
[Eksport]
Automatisk=Nei
[Paaminnelse]
Automatisk=Nei
AutomatiskService=Nei
Tidspunkt=
[Printer]
AutomatiskEpost=Nei
AntallDagerFrem=Nei
AntallDagerFremAntall=2
'@ -split '\r?\n'
$inPaaminnelseSect = $false
# Note: To operate directly on your file, replace
# ($iniFileLines) with -File $iniFile
switch -Regex ($iniFileLines) {
'^(Tidspunkt)=$' { '{0}=10:00' -f $Matches[1]; continue }
'^(AntallDagerFrem)=Nei' { '{0}=Ja' -f $Matches[1]; continue }
'^(Automatisk)=Nei' {
if ($inPaaminnelseSect) { '{0}=Ja' -f $Matches[1] } else { $_ }; continue
}
'^\[(.+)\]' { $inPaaminnelseSect = $Matches[1] -eq 'Paaminnelse'; $_ }
default { $_ } # pass through
}
Note:
*
*The above assumes:
*
*that your file has exactly the format as shown in your question. However, it would be easy to make parsing more flexible with respect to optional whitespace.
*that you only want to replace entries if they have a specific current value (or none); this too could easily be changed to replace the values irrespective of the current value.
*Matching is case-insensitive, as PowerShell generally is by default; add the -CaseSensitive switch to make it case-sensitive.
Output:
[Datoformat]
Separator=.
[Eksport]
Automatisk=Nei
[Paaminnelse]
Automatisk=Ja
AutomatiskService=Nei
Tidspunkt=10:00
[Printer]
AutomatiskEpost=Nei
AntallDagerFrem=Ja
AntallDagerFremAntall=2
A: Theo and mklement0 have both pointed to the PsIni-module which does exactly what I wanted to do - so consider the question answered.
Set-IniContent $inifile -Sections 'Paaminnelse' -NameValuePairs @{
Automatisk='Ja'; Tidspunkt='10:00'; AntallDagerFrem='Ja'
} | Out-IniFile $inifile -Force | unknown | |
d13162 | train | You should ALWAYS use parametrized queries - this prevents SQL injection attacks, is better for performance, and avoids unnecessary conversions of data to strings just to insert it into the database.
Try somethnig like this:
// define your INSERT query as string *WITH PARAMETERS*
string insertStmt = "INSERT into survey_Request1(sur_no, sur_custname, sur_address, sur_emp, sur_date, sur_time, Sur_status) VALUES(@Surname, @SurCustName, @SurAddress, @SurEmp, @SurDate, @SurTime, @SurStatus)";
// put your connection and command into "using" blocks
using(SqlConnection conn = new SqlConnection("-your-connection-string-here-"))
using(SqlCommand cmd = new SqlCommand(insertStmt, conn))
{
// define parameters and supply values
cmd.Parameters.AddWithValue("@Surname", textBox9.Text.Trim());
cmd.Parameters.AddWithValue("@SurCustName", textBox8.Text.Trim());
cmd.Parameters.AddWithValue("@SurAddress", textBox5.Text.Trim());
cmd.Parameters.AddWithValue("@SurEmp", textBox1.Text.Trim());
cmd.Parameters.AddWithValue("@SurDate", dateTimePicker2.Value.Date);
cmd.Parameters.AddWithValue("@SurTime", dateTimePicker2.Value.Time);
cmd.Parameters.AddWithValue("@SurStatus", "Active");
// open connection, execute query, close connection again
conn.Open();
int rowsAffected = cmd.ExecuteNonQuery();
conn.Close();
}
It would also be advisable to name your textboxes with more expressive names. textbox9 doesn't really tell me which textbox that is - textboxSurname would be MUCH better! | unknown | |
d13163 | train | I think you are looking for this:
select vehicleId
, Project
, month(inspectiondate) month
, year(inspectiondate) year
, datediff(day , min(inspectiondate), case when max(inspectiondate) = min(inspectiondate) then eomonth(min(inspectiondate)) else max(inspectiondate) end) days
from Vehicles
group by vehicleId, Project , month(inspectiondate), year(inspectiondate)
This query, for each month/year and each specific vehicle in a project in that month/year, gets the max and min inspection date and calculates the difference.
db<>fiddle here
A: I am not proud of this solution, but I think it works for you. My approach was to create a table of days and then look at which project the vehicle was assigned to on each day. Finally, aggregate by month and year to get the results. I had to do this as a script since you can't use aggregate functions in the definitions of recursive CTEs, but you may find a way to do this without needing a recursive CTE.
I created a table variable to import your data so I could write this. Note, I added an extra assignment to test assignments that spanned months.
DECLARE @Vehicles AS TABLE
(
[VehicleID] INT NOT NULL,
[Project] CHAR(2) NOT NULL,
[InspectionDate] DATE NOT NULL
);
INSERT INTO @Vehicles
(
[VehicleID],
[Project],
[InspectionDate]
)
VALUES
(1, 'P1', '2021-08-20'),
(1, 'P1', '2021-09-05'),
(1, 'P2', '2021-09-15'),
(1, 'P3', '2021-09-20'),
(1, 'P2', '2021-10-10'),
(1, 'P1', '2021-10-20'),
(1, 'P3', '2021-10-21'),
(1, 'P2', '2021-10-22'),
(1, 'P4', '2021-11-15'),
(1, 'P4', '2021-11-25'),
(1, 'P4', '2021-11-30'),
(1, 'P1', '2022-02-05');
DECLARE @StartDate AS DATE, @EndDate AS DATE;
SELECT @StartDate = MIN([InspectionDate]), @EndDate = MAX([InspectionDate])
FROM @Vehicles;
;WITH [seq]([n])
AS (SELECT 0 AS [n]
UNION ALL
SELECT [n] + 1
FROM [seq]
WHERE [n] < DATEDIFF(DAY, @StartDate, @EndDate)),
[days]
AS (SELECT DATEADD(DAY, [n], @StartDate) AS [d]
FROM [seq]),
[inspections]
AS (SELECT [VehicleID],
[Project],
[InspectionDate],
LEAD([InspectionDate], 1) OVER (PARTITION BY [VehicleID]
ORDER BY [InspectionDate]
) AS [NextInspectionDate]
FROM @Vehicles),
[assignmentsByDay]
AS (SELECT [d].[d], [i].[VehicleID], [i].[Project]
FROM [days] AS [d]
INNER JOIN [inspections] AS [i]
ON [d].[d] >= [i].[InspectionDate]
AND [d] < [i].[NextInspectionDate])
SELECT [assignmentsByDay].[VehicleID],
[assignmentsByDay].[Project],
MONTH([assignmentsByDay].[d]) AS [month],
YEAR([assignmentsByDay].[d]) AS [year],
COUNT(*) AS [daysAssigned]
FROM [assignmentsByDay]
GROUP BY [assignmentsByDay].[VehicleID],
[assignmentsByDay].[Project],
MONTH([assignmentsByDay].[d]),
YEAR([assignmentsByDay].[d])
ORDER BY [year], [month], [assignmentsByDay].[VehicleID], [assignmentsByDay].[Project]
OPTION(MAXRECURSION 0);
And the output is:
VehicleID  Project  month  year  daysAssigned
1          P1       8      2021  12
1          P1       9      2021  14
1          P2       9      2021  5
1          P3       9      2021  11
1          P1       10     2021  1
1          P2       10     2021  20
1          P3       10     2021  10
1          P2       11     2021  14
1          P4       11     2021  16
1          P4       12     2021  31
1          P4       1      2022  31
1          P4       2      2022  4 | unknown | 
d13164 | train | In your class/component:
state = {
indexNum: 4, // arbitrary value
}
displayStatus(item) {
if(item.id > this.state.indexNum){ // Incomplete
return <View style={styles.progressPoint}><Text>I</Text></View>;
}
else if(item.id == this.state.indexNum){ // Active
return <View style={styles.progressPoint}><Text>A</Text></View>;
}
else if(item.id < this.state.indexNum){ // Complete : you can use only 'else' here
return <View style={styles.progressPoint}><Text>C</Text></View>;
}
}
In render() of your class/component:
// Positions/Pages - these will serve as basis for .map - you can add more than 'id'
const positions = [{"id": 1},{"id": 2},{"id": 3},{"id": 4},{"id": 5},
{"id": 6},{"id": 7},{"id": 8},{"id": 9},{"id": 10}];
return (
<View style={styles.container}>
{positions.map((item) => (
this.displayStatus(item)
))}
</View>
);
Here's an Expo Snack of the above (based on the scope of your question) to get you started.
You can store the index number of the page in state and update this state at completion of each progress position. Note: if you are not using redux, you may need to pass/handle state (index number) on each page individually (depending on your navigation or components structure). | unknown | |
d13165 | train | A couple of years back, this was the main way of specifying when assets expire. Expires is simply a basic date-time stamp. It’s fairly useful for old user agents which still roam uncharted territories. It is however important to note that the cache-control directives, max-age and s-maxage, still take precedence on most modern systems. It’s however good practice to set matching values here for the sake of compatibility. It’s also important to ensure you format the date properly or it might be considered as expired.
taken from here
After that the response is no longer cached. See here
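For illustration, a response might send matching values like this (the date is just an example and must be in the HTTP-date format):
Cache-Control: public, max-age=86400, s-maxage=86400
Expires: Thu, 01 Jan 2026 00:00:00 GMT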
Also worth looking at | unknown | 
d13166 | train | Regarding velocity_template.xml & API Properties Configurations
The velocity_template.xml file is used to construct the API Synapse Artifacts to deploy them in the Gateway. As it is a common template file, you have to place your conditions and customizations in the correct sections of the template to reflect in the Synapse Artifact.
If you are trying to publish a REST API, then place your code block after this section in the velocity_template.xml
#foreach($handler in $handlers)
<handler xmlns="http://ws.apache.org/ns/synapse" class="$handler.className">
#if($handler.hasProperties())
#set ($map = $handler.getProperties() )
#foreach($property in $map.entrySet())
<property name="$!property.key" value="$!property.value"/>
#end
#end
</handler>
#end
<!-- place your block here -->
This makes sure that your handler is engaged after all mandatory handlers of the API Manager are engaged. If you want to engage your handler in the middle, then add a condition within the #foreach block to append your handler. You can follow this doc for more detailed information.
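For illustration, a hedged sketch of such a custom block (the handler class name is made up; the property check mirrors the condition from your question):
#if($apiObj.additionalProperties.get('encrypted') == "true")
<handler xmlns="http://ws.apache.org/ns/synapse" class="com.example.handlers.EncryptionHandler"/>
#end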
Once the velocity_template.xml changes are made, re-publish the API by selecting the Gateway environments to deploy the updated Synapse Artifacts. If it is a distributed environment, make sure to update the velocity_template.xml of the Publisher node.
Also, check for any typos in your code block: I see an extra ) at the end of the #if condition.
#if($apiObj.additionalProperties.get('encrypted') == "true"))
... | unknown | |
d13167 | train | I would use Object.keys to be able to use the filter and includes Array.prototype methods. Assuming that clases is the object containing all the state, I would do something like this:
const studentId = this.props.studentId; // or any other value
const clasesPerStudent = Object.keys(clases).filter(clase =>
clases[clase].alumnos.includes(studentId)
); | unknown | |
d13168 | train | There are quite a couple of stored procedures used for updating the MDS, but it is quite difficult to figure out how to use them. (And you can mess up the model if they are used incorrectly.)
Easier would be to try the Business Rules Create method.
This is where you can get started with webservices:
http://sqlblog.com/blogs/mds_team/archive/2010/01/12/getting-started-with-the-web-services-api-in-sql-server-2008-r2-master-data-services.aspx
This is where the method is defined:
https://msdn.microsoft.com/en-us/library/microsoft.masterdataservices.services.servicecontracts.iservice.businessrulescreate(v=sql.120).aspx
You will need to weigh your options whether it would be faster to figure out how to do one of the above, or just knuckle down and create the business rules. MDS 2016 is much more user friendly than the previous versions when creating business rules(fyi) | unknown | |
d13169 | train | You don't need mocking here. You can write a simple test such as
@Test
public void testListConversionForEmpty() {
assertThat(theConvertingMethod(emptyListOfProduct1), is(emptyListOfProduct2));
}
And then you go in, and add more test methods that act on lists with real content.
In other words: you only use mocking frameworks when creating "real" objects is too complicated.
In your case, you should simply instantiate a few Product1 and Product2 objects, put them into lists, and make sure that your conversion code delivers the expected results. Meaning: you can fully control the input without mocking anything.
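For instance, a sketch of such a test could look like the following; the constructor arguments, getters and the theConvertingMethod() call are placeholders, so adapt them to whatever your real Product1/Product2 classes and converter actually expose:
import static org.junit.Assert.assertEquals;
import java.util.Collections;
import java.util.List;
import org.junit.Test;
public class ProductConversionTest {
    @Test
    public void convertsSingleProduct() {
        // Hypothetical fields - swap in the real ones from your Product1/Product2 classes
        Product1 source = new Product1("42", "Widget");
        List<Product2> result = theConvertingMethod(Collections.singletonList(source));
        assertEquals(1, result.size());
        assertEquals("42", result.get(0).getId());
        assertEquals("Widget", result.get(0).getName());
    }
}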
( for the record: is() up there is a hamcrest matcher ) | unknown | |
d13170 | train | I suggest using an enum for clarity.
public enum Language {
En_Ja,
Ja_En
}
Language lang = Language.En_Ja;
public void Switch(object sender, EventArgs e) {
lang = lang == Language.En_Ja ? Language.Ja_En : Language.En_Ja;
}
A: Short Answer:
The from and to fields need to be initialized by the HTTP request. Something like:
partial class Translator : Page
{
string from, to;
protected override void OnLoad(EventArgs e)
{
        var to = Context.Request.QueryString["to"];
        var from = Context.Request.QueryString["from"];
        if(to == null || from == null) throw new Exception("Missing 'from'/'to' query parameter");
        switch(from.ToUpper())
        {
            case "EN": { this.from = "en"; break; }
            case "JA": { this.from = "ja"; break; }
        }
        switch(to.ToUpper())
        {
            case "EN": { this.to = "en"; break; }
            case "JA": { this.to = "ja"; break; }
        }
        if(this.from == null || this.to == null) throw new Exception("Unsupported language code");
}
}
Longer Answer:
This is actually a good use case for polymorphism:
/* C# 7 syntax */
abstract class LanguageSetting
{
public static LanguageSetting English = JA.alt;
public static LanguageSetting Japanese = EN.alt;
public abstract LanguageSetting Alternate { get; }
public abstract override string ToString();
sealed class EN : LanguageSetting
{
public static readonly LanguageSetting alt = new JA();
public override LanguageSetting Alternate => alt;
public override string ToString() => "en";
}
sealed class JA : LanguageSetting
{
public static readonly LanguageSetting alt = new EN();
public override LanguageSetting Alternate => alt;
public override string ToString() => "js";
}
}
Then your Submit() and Switch methods are simply:
void Submit(object Sender, EventArgs e)
{
string from = lang.ToString();
string to = lang.Alternate.ToString();
string uri = "https://api.microsofttranslator.com/v2/Http.svc/Translate?text=" +
        HttpUtility.UrlEncode(text) + "&from=" + from + "&to=" + to;
}
and
void Switch(object Sender, EventArgs e)
{
lang = lang.Alternate;
}
Finally, we initialize with something like:
protected override void OnLoad(EventArgs e)
{
var from = Context.Request.QueryString["from"];
switch(from.ToUpper())
{
case "EN": { lang = LanguageSetting.English; break; }
case "JA": { lang = LanguageSetting.Japanese; break; }
}
if(lang == null) throw new Exception();
} | unknown | |
d13171 | train | See this question: How can I get a precise time, for example in milliseconds in objective-c?
A: I use CADisplayLink to timing my animation. When you create a new opengl-es project on xCode, it will give you an sample opengl-es codes, and it control animation by CADisplayLink.
EDIT:
I found I misunderstand this problem.
I check the CFAbsoluteTimeGetCurrent() method on this apple on-line doc, and it said the method may be not monotonically increasing. And the method CACurrentMediaTime() which I prefered derived value by mach_absolute_time(). The document about CACurrentMediaTime() is on this link.
A: For frame-rate-independent time, I generally use the CFAbsoluteTimeGetCurrent() function. It returns a CFAbsoluteTime (which I believe is typedef'd to double) since some date arbitrarily far in the past. | unknown | |
d13172 | train | You can use this as a starting point.
You extend TimePickerDialog and add two methods, setMin and setMax.
In the onTimeChanged method check that the new time is valid with respect to the min/max times.
It still needs some polishing though...
public class BoundTimePickerDialog extends TimePickerDialog {
private int minHour = -1, minMinute = -1, maxHour = 100, maxMinute = 100;
private int currentHour, currentMinute;
public BoundTimePickerDialog(Context context, OnTimeSetListener callBack, int hourOfDay, int minute, boolean is24HourView) {
super(context, callBack, hourOfDay, minute, is24HourView);
}
public void setMin(int hour, int minute) {
minHour = hour;
minMinute = minute;
}
public void setMax(int hour, int minute) {
maxHour = hour;
maxMinute = minute;
}
@Override
public void onTimeChanged(TimePicker view, int hourOfDay, int minute) {
super.onTimeChanged(view, hourOfDay, minute);
boolean validTime;
        if(hourOfDay < minHour || hourOfDay > maxHour) {
validTime = false;
}
else if(hourOfDay == minHour) {
validTime = minute >= minMinute;
}
else if(hourOfDay == maxHour) {
validTime = minute <= maxMinute;
}
else {
validTime = true;
}
if(validTime) {
currentHour = hourOfDay;
currentMinute = minute;
}
else {
updateTime(currentHour, currentMinute);
}
}
} | unknown | |
d13173 | train | You should take a look at the Vector matching section, especially at the on/ignoring keywords. If you just want to do an operation like counterA + counterB a query similar to:
counterA + ignoring(target_base_url) counterB
should work. This will sum both counters matching on all labels except the target_base_url. If you want to match only on a subset of the labels you could use the on keyword. | unknown | |
d13174 | train | I can access page from outside world. When I'm on local network I can not access it with external IP. | unknown | |
d13175 | train | In Delphi 7, all TForm windows are owned by the hidden TApplication window at runtime, which is the window that actually manages the app's Taskbar button. That window remains on the primary monitor when you move your Forms to other monitors. That is why you don't see the app's Taskbar button move to other monitors.
In Delphi 2007 and later, TForm windows are no longer owned by the hidden TApplication window by default on Vista+. This behavior is controlled by the TApplication.MainFormOnTaskBar property, which did not exist yet in Delphi 7. Being owned by the hidden TApplication window causes all kinds of problems in Vista+ for the Taskbar, the Task switcher, Aero, etc, so MainFormOnTaskBar should always be set to true.
When you upgrade your Delphi 7 project to Delphi 10.2, be sure to set Application.MainFormOnTaskBar := true; in the app's main startup code so the app interacts with Vista+ properly. MainFormOnTaskBar is false by default when migrating a pre-D2007 project. | unknown | |
d13176 | train | You don't have to change the CSS of the button but of the whole item:
just add:
.item{
height: 380px;
}
Of course, you have to pay attention to the maximum item height: your value must not be less than it, or the price won't be visible anymore.
In this case, min-height would be the better alternative.
A: I would recommend setting a min-height: 370px; for the easiest solution.
You do not want to set a static height for this because if you have an item with a longer description it will not automatically add space but just cram everything in.
A: Add a static height to .item
height:375px;
The height:auto; declaration tells .item to expand as big as it needs to be to fit everything in, so the tops of the divs line up, but since they are different heights, the bottoms are staggered.
As some of my co-responders have noted, min-height is also an acceptable option, until you have an item with enough text that the item expands past the min-height value, at which point the items will begin to expand and stagger again.
A: This should point you in the right direction: http://jsfiddle.net/v9grm/
Create a grid and with the help of display: table make the columns the same height. Then place the button at the bottom of the column with position: absolute. | unknown | |
d13177 | train | I don't fully understand the reason to have a global TextController (and I don't think it's a good idea either), but using Riverpod it would look something like this:
import 'package:flutter/material.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';
class AccountData {
final String firstName;
final String lastName;
final String phoneNumber;
AccountData({
required this.firstName,
required this.lastName,
required this.phoneNumber,
});
factory AccountData.fromJson(Map<String, dynamic> json) {
return AccountData(
firstName: json['firstName'],
lastName: json['lastName'],
phoneNumber: json['phoneNumber'],
);
}
}
Future<Map<String, dynamic>> fetchAccountData() async {
final response = {
'firstName': 'Name',
'lastName': 'LastName',
'phoneNumber': '98786758690',
};
return Future.delayed(const Duration(seconds: 3), () => response);
}
final futureData = FutureProvider<AccountData>((_) async {
final data = await fetchAccountData();
return AccountData.fromJson(data);
});
final firstNameController = ChangeNotifierProvider<TextEditingController>((ref) {
final controller = TextEditingController();
ref.listen<AsyncValue<AccountData>>(futureData, (_, curr) {
curr.whenOrNull(
data: (d) => controller.text = d.firstName,
);
});
return controller;
});
final lastNameController = ChangeNotifierProvider<TextEditingController>((ref) {
final controller = TextEditingController();
ref.listen<AsyncValue<AccountData>>(futureData, (_, curr) {
curr.whenOrNull(
data: (d) => controller.text = d.lastName,
);
});
return controller;
});
void main() {
runApp(const ProviderScope(child: MyApp()));
}
class MyApp extends StatelessWidget {
const MyApp();
@override
Widget build(BuildContext context) {
return MaterialApp(
debugShowCheckedModeBanner: false,
home: Scaffold(
body: Center(
child: MyWidget(),
),
),
);
}
}
class MyWidget extends ConsumerWidget {
@override
Widget build(BuildContext context, WidgetRef ref) {
final firstController = ref.watch(firstNameController.notifier);
final lastcontroller = ref.watch(lastNameController.notifier);
return ref.watch(futureData).when(
loading: () => const CircularProgressIndicator(),
error: (e, _) => TextButton(
child: Text('$e'),
onPressed: () => ref.refresh(futureData),
),
data: (_) {
return Column(
children: [
TextField(
controller: firstController,
),
TextField(
controller: lastcontroller,
),
]
);
}
);
}
}
Now, following Hazar Belge, using a late initializer is a good idea. Without state management, the idea would be to simply use a FutureBuilder and then pass the data down to a StatefulWidget which initializes the TextEditingControllers with that data (updating them again in didUpdateWidget), all of it without depending on an external package. (The bigger your app gets, the more packages you may need to help you develop fast, but I would recommend starting with the basics to grasp all you can do with the framework.)
A: You can use late initialization like this:
late TextEditingController editedFirstName;
late TextEditingController editedLastName;
return SafeArea(
child: WillPopScope(
onWillPop: () async {
Navigator.pushReplacement(
context,
MaterialPageRoute(
builder: (context) => const ProfileScreen(),
),
);
return shouldPop;
},
body: FutureBuilder<Response>(
future: futureData,
builder: (context, snapshot) {
if (snapshot.hasData) {
            AccountData data3 = AccountData.fromJson( // data3 has data3.firstName and data3.lastName
json.decode(snapshot.data!.body),
);
editedFirstName = TextEditingController(text: data3.firstname)
editedLastName = TextEditingController(text: data3.lastname) | unknown | |
d13178 | train | You are using xelatex, so you can use whatever font you have installed on your operating system
---
title: ''
output:
pdf_document:
latex_engine: xelatex
keep_tex: yes
geometry: left=0.35in,right=0.35in,top=0.3in,bottom=0.3in
header-includes:
- \usepackage{graphicx}
- \usepackage{fancyhdr}
- \usepackage{fontspec}
- \pagestyle{fancy}
- \renewcommand{\headrulewidth}{0.0pt}
fontsize: 9pt
---
Global
\begingroup
\setmainfont{Arial}
\fontsize{18}{16}\selectfont
\textcolor{black}{tasks}
\endgroup
Global | unknown | |
d13179 | train | Laravel outof the box, comes with functionality to connect with SQL server. I don't think you need to create a custom class on your own.
You might need to configure Laravel to connect to your SQL server though. Have a look at the link below for setting up Laravel for SQL server...
https://laravel.com/docs/5.6/database | unknown | |
d13180 | train | If you mean RXT nesting by nested templates, WSO2 Governance Registry does not support nested RXTs as of now. | unknown | |
d13181 | train | yes accepts an argument, y is just the default value.
You can use:
yes n | command
How cool is that?
Pro tip:
# Use this and see what happens
yes maybe | command
Note: Not every command implements maybe. | unknown | |
d13182 | train | URL fragments are used by the client (a web browser) to help determine how to handle the content that it's trying to display.
These can be anchors to jump to a certain part of a page, or they can be used by JavaScript to pass information between scripts. This is not caused by your htaccess file, as fragments are never sent to the server (where your htaccess file lives).
Try selectively removing javascript (especially scripts that you load from 3rd party sites that share links like "Add To Any"), otherwise make sure you have no links in your content that look like this:
some/path/file.html# | unknown | |
d13183 | train | There is enough scope for misunderstanding and ambiguity in a cryptographic algorithm that it is standard practice to release example inputs and outputs (test vectors) with the specification of the algorithm, so you don't have to work through the algorithm by hand. There appear to be test vectors in the specification at http://csrc.nist.gov/groups/STM/cavp/. In fact, appendix B of fips-197.pdf appears to show how the state table evolves during a single encryption.
Of course, for a system like AES, where it is not practical to test every possible input and key, you can always argue that while testing can find errors it can never prove the absence of errors.
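One practical way to use a test vector: take the AES-128 example from FIPS-197 Appendix C.1 and compare your implementation's output against it (and/or against the JDK's built-in AES, as in this minimal sketch; the class name and the spot where you plug in your own code are assumptions):
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
public class AesKnownAnswerTest {
    // Key, plaintext and expected ciphertext from FIPS-197 Appendix C.1 (AES-128)
    private static final byte[] KEY = hex("000102030405060708090a0b0c0d0e0f");
    private static final byte[] PLAIN = hex("00112233445566778899aabbccddeeff");
    private static final byte[] EXPECTED = hex("69c4e0d86a7b0430d8cdb78070b4c55a");
    public static void main(String[] args) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/ECB/NoPadding"); // single block, no padding
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, "AES"));
        byte[] actual = cipher.doFinal(PLAIN);
        // byte[] actual = yourAes.encryptBlock(PLAIN, KEY); // plug in your own implementation here
        System.out.println(Arrays.equals(actual, EXPECTED) ? "OK" : "MISMATCH");
    }
    private static byte[] hex(String s) {
        byte[] out = new byte[s.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }
}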
A: Anything a computer can do - you can do as well.... they don't use magic for calculation.
To test your algorithm, you should encrypt a small text and than decrypt and see that you get the same results (it will prove the algorithm works, it won't prove it's AES....) | unknown | |
d13184 | train | ng-view creates its own scope. What is likely happening is that you are creating $scope property userMovies on this ng-view child scope, which your MovieboxCtrl scope can not see/access. As @darkporter alluded to in a comment, your MovieboxCtrl should create an empty userMovies array. Then when the MovieCtrl sets userMovies, it will update the existing object reference, rather than create a new property.
Here is a plunker that stores the data in the service, rather than in a controller.
angular.module('moviebox.services', [])
.factory('movieboxApi', function() {
var model = { movies: [] };
model.getMoviesByTitle = function(query) {
angular.copy([{'title': 'foo'},{'title': 'foo1'},
{'title': 'foo2'},{'title': 'foo3'},
{'title': 'foo4'}], model.movies);
};
return model;
});
app.controller('MainCtrl', function($scope, movieboxApi) {
$scope.getMoviesByTitle = function() {
movieboxApi.getMoviesByTitle($scope.query);
};
$scope.userMovies = movieboxApi.movies;
});
app.controller('MovieboxCtrl', function ($scope, movieboxApi) {
$scope.userMovies = movieboxApi.movies;
});
HTML change:
<body ng-controller="MainCtrl">
angular.copy() is used because the controllers have a reference to the model.movies array that the service initially created. We want to modify that array, not assign a new one. | unknown | |
d13185 | train | Try this.
<property name="regProperty" expression="get-property('registry', 'gov:/data/xml/collectionx@abc')"/>
Ref: http://movingaheadblog.blogspot.com/2015/09/wso2-esb-how-to-read-registry-property.html
A: To get the value of test1, use following property.
<property expression="get-property('registry', 'gov:/apimgt/customsequences/in/Seq1.xml@test1')" name="test_property5"/>
From above example the location of resource is showing /_system/governance/apimgt/customsequences/in/Seq1.xml. So to make its path, we will use subpath and leave /_system/governance from the beginning. The path will be gov:/apimgt/customsequences/in/Seq1.xml. Now to access test1 property, simply append @test1 to the path. | unknown | |
d13186 | train | You can edit the unit test suite by finding the test-suite reference in the .cabal file of the project.
To do this, go to your project directory, open the project's .cabal file in a text editor, and search for the test-suite stanza. It will be of the form test-suite ExampleTests, where ExampleTests is the name of the test suite for the project (its main source file is given by the stanza's main-is field).
Simply add tests to this file using the testing framework of your choice. Leksah will run these tests automatically through the IDE GUI. | unknown | |
d13187 | train | So, you want to do something in the watch app extension and based on the results, schedule a UILocalNotification that will be sent to the phone at some point?
You cannot directly schedule a UILocalNotification from the watch because you don't have access to the UIApplication object. See Apple staff comment here. However, using Watch Connectivity, you could send a message from the watch to the phone and have the phone app create and schedule it in the background. The watch will display the notification iff the phone is locked at the trigger time. The watch's notification scene will be invoked in that case.
A: Assuming you want to send the notification from the phone to the watch: You can use UILocalNotification to send a notification to the watch - but the watch must be locked to receive it. There's also no guarantee when the Watch OS will turn on the watch to run the code that receives your notification, so your notification may arrive minutes or hours after it's sent. | unknown | |
d13188 | train | Realized that you are unable to see where the counters are incremented;; the counters are boolean statements at the end of each block of code and are set to true/false during buildtime. | unknown | |
d13189 | train | Arrays.toString(byte[] a) "Returns a string representation of the contents of the specified array." It does not convert a byte array to a String. Instead try using:
new String(decryptAES(text1), "UTF-8"); | unknown | |
d13190 | train | I found out this is an issue with Android KitKat, if you want to check it out:
https://code.google.com/p/android/issues/detail?id=63793
https://code.google.com/p/android/issues/detail?id=63618
A: My simple workaround:
It sets the alarm with a 1 sec delay and stops the service by calling stopForeground().
@Override
public void onTaskRemoved(Intent rootIntent) {
super.onTaskRemoved(rootIntent);
if (Build.VERSION.SDK_INT >= 19) {
Intent serviceIntent = new Intent(this, MessageHandlerService.class);
serviceIntent.setAction(ACTION_REFRESH);
serviceIntent.addFlags(Intent.FLAG_RECEIVER_FOREGROUND);
AlarmManager manager = (AlarmManager) getSystemService(ALARM_SERVICE);
manager.set(AlarmManager.RTC, System.currentTimeMillis()+1000, PendingIntent.getService(this, 1, serviceIntent, 0)); //1 sec delay
stopForeground(true);
}
} | unknown | |
d13191 | train | graph = { "a" : ["c"],
"b" : ["c", "e"],
"c" : ["a", "b", "d", "e"],
"d" : ["c"],
"e" : ["c", "b"],
"f" : [] }
def max_length(x):
return len(graph[x])
# Determine what index has the longest value
index = max(graph, key=max_length)
m = len(graph[index])
# Fill the list with `m` zeroes
out = [0 for x in range(m+1)]
for k in graph:
l = len(graph[k])
out[l]+=1
print(out)
Outputs [1, 2, 2, 0, 1]
A: Another solution using Counter :
from collections import Counter
a = Counter(map(len, graph.values())) # map with degree as key and number of nodes as value
out = [a[i] for i in range(max(a)+1)] # transform the map to a list
A: You can find the degrees of individual nodes by simply finding lengths of each element's list.
all_degrees = map(len, graph.values())
This, in your case produces the individual degrees, not necessarily in same order as the elements.
[1, 4, 2, 2, 1, 0]
Next thing is simply frequency count in the list.
from collections import defaultdict
freq = defaultdict(int)
for i in all_degrees:
freq[i] += 1
print freq
Out: defaultdict(<type 'int'>, {0: 1, 1: 2, 2: 2, 4: 1})
As expected, freq now gives the count of each degree value, which you can then print, append to a list, etc. You can simply print the values of the dictionary freq as
print freq.values()
Note that this is not quite the desired list, because degrees that never occur (here 3) have no entry in the dictionary. To get the required output, create an empty list and append the count for every degree from 0 up to the maximum:
out = list()
for i in range(max(all_degrees)+1):
out.append(freq[i])
Again returns out = [1,2,2,0,1] - the required output. | unknown | |
d13192 | train | I can't speak for Google App Engine, but as a rather recent Django user myself I recently moved my development site over to a WebFaction server and I must say I was extremely impressed. They are extremely friendly to Django setups (among others) and the support staff answered any small problems I had promptly. I would definitely recommend them.
For other Django-friendly hosts, check out Djangofriendly.com.
A: If you have already written your django application, it may be really difficult to install it on Google App Engine, since you will have to adapt your data model. GAE uses big table, a (key,data) store, instead of a traditional relational model. It is great for performance but makes your programming more difficult (no built in many-to-many relationship handlers, for example).
Furthermore, most apps available for django will not work on GAE since these apps use the relational data model. The most obvious problem is that the great admin app of django will not work. Furthermore, GAE tends to make you use google accounts for identification. This can be circumvented but again, not using readily available django apps. This could be great for you, but it can be a hassle (for example, lots of user names are already taken at google).
So, my final advice is that, if you are a beginner, you should avoid GAE.
If you are based in Europe, djangohosting.ch is also a good choice, instead of webfaction.
A: A bit late with my answer, but nevertheless... I am Django beginner and have my first Django App up and running at GAE. It was App Engine Patch that made it happen. Using it you have django admin and several other apps available out of the box. If you'd like to try it, go for the trunk version. This project is reasonably well documented and have responsive community.
A: I'm a Google app engine developer, so I can't say much about webfaction, but as far as I have used it setting up a web app with app-engine is pretty straight forward¹. The support staff however is not quite good.
1- http://code.google.com/appengine/articles/django.html
A: Webfaction:
Plus:
*
*Great shell access. Ability to install python modules, or anything else you might need. You will love checking out source code from shell to update your production (no need for FTPing anything anymore!)
*Very good performance and reliability
*Great support + wealth of info on help knowledge base and in the forums. (FORGET bluehost or anything else you ever tried). I was surprised by amount of answers I found to what I thought would be difficult questions.
*You can use regular database and you can do joins (see app engine minus #2)
Minus:
*
*Setting up initial deployment can be a bit tricky the first few times around (as is to be expected from shell).
*Growing-scaling can be expensive and you probably will not survive being "slashdotted"
App Engine
Plus:
*
*Free to start with
*Initial database is easier to setup.
*Deployment is a breeze
*Enforcement of "good" design principles from the start which help you with #5. (Such as hard limits, db denormalizing etc)
*Scalability (but this does not come free - you need to think ahead).
*No maintenance: auto backups, security comes for free, logging + centralized dashboard, software updates are automatic.
Minus:
*
*Setting up Django on App Engine is not so straightforward, as well as getting used to this setup. The webapp framework from google is weak.
*Database model takes a little bit of time to wrap your head around. This is not your momma's SQL Server. For example you have to denormalize your DB from the start, and you cannot do Joins (unless they are self joins)
*The usual things you are used to are not always there. Some things such as testing and data-importing are not that easy anymore.
*You are tied down to App Engine and migrating your data to another DB or server, while not impossible, is not easy. (Not that you do data migration that often! Probably never)
*Hard limits in requests, responses and file sizes (last time I heard about 1MB).
*App Engine currently supports Python 2.5 only.
Can't think of anything else so far.
I am currently with Webfaction and am testing App Engine as well. I have no difficulty going from Django-Webfaction to App-Engine way of thinking. However, I am not sure if the AppEngine -> Standalone servers route would be just as easy.
References
Talks:
*
*Guido on Google App Engine http://www.youtube.com/watch?v=CmyFcChTc4M
*Task Queues in App Engine: http://www.youtube.com/watch?v=o3TuRs9ANhs
A: The thing to remember about GAE is that it works differently than a standard python install and apps you have may not work well (or at all) in that environment. The biggest difference is the database. While there are advantages to the non-relational database available with GAE, you need to treat it differently and there are many things that your code may be expecting your database to be able to do that it cannot.
If you are starting from scratch on an app, either platform would work fine. If you have an existing python app, getting it to work on GAE will take considerable work. | unknown | |
d13193 | train | From N4296 (first draft after final C++14) [15.1p3]:
Throwing an exception copy-initializes (8.5, 12.8) a temporary object,
called the exception object. The temporary is an lvalue and is used
to initialize the variable declared in the matching handler (15.3).
So you can't assume that your temporary "survives the throw". If throwing, the copy constructor of an exception object of type std::exception will be called with e as the argument. The temporary that e is bound to will be destroyed when control leaves the full expression containing the call to my_assert (either after a normal return or as part of stack unwinding, since you're conditionally throwing the exception).
There are circumstances when the copy construction of the exception object can be elided, but this is not one of them, according to [12.8p31.2]:
— in a throw-expression (5.17), when the operand is the name of a
non-volatile automatic object (other than a function or catch-clause
parameter) whose scope does not extend beyond the end of the innermost
enclosing try-block (if there is one), the copy/move operation from
the operand to the exception object (15.1) can be omitted by
constructing the automatic object directly into the exception object
(emphasis mine) | unknown | |
d13194 | train | So it sounds like you want to append li elements to an existing ul with the ID "msg", where the content of each li comes from the .msg property of each entry in the jsonData:
jQuery.getJSON("http://127.0.0.1/conn_mysql.php", function (jsonData) {
var markup;
markup = [];
jQuery.each(jsonData, function (i, j) {
// Note: Be sure you escape `msg` if you need to; if it's already
// formatted as HTML, you're fine without as below
markup.push("<li>");
markup.push(j.msg);
markup.push("</li>");
});
jQuery('#msg').append(markup.join(""));
});
Here's a working example. (See the "off-topic" comment below; the working example uses one of the tricks mentioned.)
Or if the new lis should replace the existing ones in the ul, use
jQuery('#msg').html(markup.join(""));
...instead of append.
Either way, that uses the well-known "build it up in an array and then join it" idiom for building up markup, which is almost always more efficient (in JavaScript implementations) than doing string concatenation. (And building up the markup first, and then applying it with a single DOM manipulation call [in this case via jQuery] will definitely be more efficient than adding each li individually).
If you're not concerned about efficiency (this is showing the result of an Ajax call, after all, and if you know there are only a couple of lis to append...), you could do it like this:
jQuery.getJSON("http://127.0.0.1/conn_mysql.php", function (jsonData) {
var ul;
ul = jQuery('#msg');
// If you want to clear it first: ul.empty();
jQuery.each(jsonData, function (i, j) {
// Note: Be sure you escape `msg` if you need to; if it's already
// formatted as HTML, you're fine without as below
ul.append("<li>" + j.msg + "</li>");
});
});
...but if you're dealing with a lot of items, or if you were doing this in something that happened frequently, you're probably better off with the build-it-up version.
Off-topic: Your original code mixes using the jQuery symbol and the $ symbol. Probably best to stick to one or the other, so I went with jQuery above. If you're not using noConflict, or if you're doing one of the nifty tricks for using noConflict but still being able to use $, you can safely change the above over to using $ throughout. | unknown | |
d13195 | train | Well first you have to have in mind that the values of ColumnDefinition Width and RowDefinition Height are not of type Double but of type GridLength
And after that there are two scenarios that I can think of:
*
*Binding to another element's value
*Binding to value from the ViewModel or the code behind
Case 1:
If You're binding to some value that is double you will need to also use a Converter to convert this value to GridLength
Case 2:
If You're binding to something in the code you could create the property of type GridLength and bind directly, or if the value is double again use Converter like in the previous use case.
Some References on the type
GridLength Structure
GridUnitType Enumeration
*
*Auto - The size is determined by the size properties of the content object.
*Pixel - The value is expressed as a pixel.
*Star - The value is expressed as a weighted proportion of available space.
Edit - Just a simple example of working binding
Still didn't manage to find time to recreate your exact situation so I just used GridView (as it has also header) - Content is purple, header consists of two grid columns - green and red, green is bound to dependency property defined in main page
XAML
<Page
...
x:Name="root">
<Page.Resources>
<Style TargetType="GridView" >
<Setter Property="HeaderTemplate">
<Setter.Value>
<DataTemplate>
<Grid x:Name="GridHeader" Height="200">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="5*"/>
<ColumnDefinition Width="{Binding ElementName=root, Path=TestGridLength}"/>
</Grid.ColumnDefinitions>
<Grid Background="Red" Grid.Column="0"></Grid>
<Grid Background="Green" Grid.Column="1"></Grid>
</Grid>
</DataTemplate>
</Setter.Value>
</Setter>
</Style>
</Page.Resources>
<GridView Background="Purple">
</GridView>
</Page>
Code behind
public GridLength TestGridLength
{
get { return (GridLength)GetValue(TestGridLengthProperty); }
set { SetValue(TestGridLengthProperty, value); }
}
public static readonly DependencyProperty TestGridLengthProperty =
DependencyProperty.Register(
"TestGridLength",
typeof(GridLength),
typeof(MainPage),
new PropertyMetadata(null));
public MainPage()
{
this.InitializeComponent();
this.TestGridLength = new GridLength(10, GridUnitType.Star);
} | unknown | |
d13196 | train | The .parentNode property belongs to a DOM element, not a d3 selection.
To access the DOM element from a d3 selection is a little tricky, and there are often better ways to achieve what you want to do. Regardless, if you have a single element selected, it will be the first and only item within the first item of a d3.selection - i.e. you need to access it like so:
var someIdDOMelement = d3.select("#someid")[0][0];
An edited example of your original using this: http://jsfiddle.net/cpcj5/1/
If you have the id and specifically want to get the DOM element, I'd just go with getElementById:
var someIdDOMElement = document.getElementById("someid")
Looking at d3 documentation for binding events using on, we can see that, within the function you bind, the this variable is the DOM element for the event which was triggered. So, when dealing with event handling in d3, you can just use this to get the DOM element from within a bound function.
But, looking at your code, I think it's easier to stick with the original implementation for re-appending the node using the original javascript methods, and then creating a d3 selection using d3.select on the dom element bound to this. This is how the original code does it: http://jsfiddle.net/cpcj5/3/
If you have any difficulties caused by this, please comment so I can address them.
A: You can also use selection.node() in D3:
var thisNode = d3.select("#" + id);
thisNode.node().parentNode.appendChild(thisNode.node()); | unknown | |
d13197 | train | Could you run avdec_h264 and then when you hit errors send a PLI?
You could parse the H264 bitstream yourself and perform an action on certain conditions.
*
*Did I get a corrupted I-Frame
*After n corrupted P/B Frames
I am going to go do some reading myself. I wonder how hard it would be to determine the amount of broken macroblocks. | unknown | |
d13198 | train | It looks like you forgot to add the q field when rebuilding the query with a pageToken field. | unknown | |
d13199 | train | The first two are U+2061 (function application) and U+2062 (invisible times). They help indicate semantic information; see this Wikipedia entry
The last one, , could be anything since it lies in the Unicode Private Use Area which is provided so that font developers can store glyphs that do not correspond to regular Unicode positions. (Unless it's a typo and really 6349 in which case it's a a Han character.) | unknown | |
d13200 | train | Map string to a class instance by instantiating classes and saving them (probably in a hash). All the classes must implement the same interface of course.
You'll find that if you code this way a better structure starts to emerge from your code--for instance you might find that where before you might have used 2, 3 or 10 similar methods to do slightly different things, now the fact that you can pass data into your constructor allows you to do it all with one or two different classes instead.
This interface and the classes that implement it (for me at least) nearly always evolve into a full-featured set of classes that I needed all along but might not have recognized otherwise.
Somehow I never seem to regret writing code the "Hard" way, but nearly always regret when I choose the easier path.
A: I'd go with what Bill K suggested in regards to implementing the same interface. But if you have the issue of wanting to call methods with different names you could try using reflection and do something like this:
Method method = Foo.class.getDeclaredMethod("methodName", parametersTypes); // Get the method you want to call
Foo foo = new Foo();
method.invoke(foo, args); // invoke the method retrieved on the object 'foo' with the given arguments
A: What do people think of this?
public static enum Tags {
TAG1, TAG2, TAG3
}
public class Stuff {
...
switch (Tags.valueOf(str)) {
case TAG1: handleTag1(); break;
case TAG2: handleTag2(); break;
case TAG3: handleTag3(); break;
}
}
The upside is that this is concise and efficient (at least in this case). The downside is that it is not so good with mixed case tags and tags with Java non-identifier characters in them; e.g. "-". (You either have to abuse accepted Java style conventions for the enum member identifiers, or you have to add an explicit String-to-enum conversion method to the enum declaration.)
Using a switch statement for dispatching is evil in some peoples' book. But in this case, you need to compare what you are gaining with what you are loosing. And I'd be surprised if polymorphic dispatching would give a significant advantage over a switch statement in terms of extensibility and maintainability.
A: you can invoke the method using reflection:
Class.getMethod
therefore you don't need a switch or a set of ifs.
A: Here is an example of the proposal of Bill K (if I understood it right)
public class Example {
static interface TagHandler {
void handle(String tag);
}
static final Map<String, Example.TagHandler> tagHandlers = new HashMap<String, Example.TagHandler>() {
{
put("tag_1", new Example.TagHandler() {
public void handle(String tag) {
System.out.println("Handling tag_1: " + tag);
}
});
put("tag_2", new Example.TagHandler() {
public void handle(String tag) {
System.out.println("Handling tag_2: " + tag);
}
});
}
};
public static void main(String[] args) {
String[] tags = { "tag_1", "tag_2", "tag_1" };
for (String tag : tags) {
tagHandlers.get(tag).handle(tag);
}
}
}
A: An indirect answer: XML typically represents data, not instructions. So it is probably more useful to map parser handling onto fields. This is what JAXB does. I suggest using JAXB or similar.
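For illustration, a minimal JAXB sketch of mapping XML onto fields (the Person class, element names and sample XML are made up, and this assumes a JDK or classpath that still provides javax.xml.bind):
import java.io.StringReader;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;
public class JaxbSketch {
    @XmlRootElement(name = "person")
    static class Person {
        public String name;  // populated from <name>
        public int age;      // populated from <age>
    }
    public static void main(String[] args) throws Exception {
        String xml = "<person><name>Ada</name><age>36</age></person>";
        Person p = (Person) JAXBContext.newInstance(Person.class)
                                       .createUnmarshaller()
                                       .unmarshal(new StringReader(xml));
        System.out.println(p.name + " " + p.age);
    }
}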
Unless you have a huge amount to do, I would strongly advise against reflection in a statically typed language. A string of } else if (tag.equals("blah")) { (or with interning, } else if (tag == "blah") { isn't going to kill you. You can even map strings onto their enum namesakes, but that is a little reflectiony. Switch-on-string should be with us in JDK7. | unknown |