_id | partition | text | language | title |
---|---|---|---|---|
d16801 | val | Have you called FlushFinalBlock() on the CryptoStream on the sending side?
using (var stream = new MemoryStream())
{
using (var cs = new CryptoStream(stream, your_encryptor, CryptoStreamMode.Write))
{
your_formatter.Serialize(cs, your_graph);
cs.FlushFinalBlock();
your_socket.Send(stream.GetBuffer(), 0, (int)stream.Length);
}
} | unknown | |
d16802 | val | You can call a child function using a pointer or a reference of the Parent class that points or refers to the child object.
For example:
#include <iostream>
struct A
{
virtual void Function() //Definition
{ std::cout << "Super_class: A\n"; }
virtual ~A() {}
};
struct B : A
{
virtual void Function() { std::cout << "Child_class\n"; }
};
int main()
{
A *obj1 = new B;
obj1->Function();
B obj2;
obj2.A::Function(); //Calling to Parent class function(A)
obj2.Function(); //Calling to Child class function(B)
delete obj1; //Free the heap-allocated object
}
A: You're unable to achieve that with your current code design. Class A does not inherit from Class C and Afunction() doesn't get passed a reference to Class C.
You can either make A inherit C like you have with B inheriting A or you can pass a reference to the function like so
class A
{
void Afunction(C* CObject);
};
Furthermore you also can't call child class functions from parent class functions when talking about inheritance. Inheritance works from Top down not from the bottom up.
For example
class A
{
void Afunction();
};
class B : public A
{
void Afunction()
{
A::Afunction(); // This is valid
}
};
This is not valid code.
class A
{
void Afunction()
{
B::Afunction(); // This is not valid as A::Afunction() does not inherit from B::Afunction()
}
};
class B : public A
{
void Afunction();
}; | unknown | |
d16803 | val | You've probably found an answer by now anyway, but here's my 2p worth for anyone else who comes across this.... you probably want to use a sub-aggregation on product_name rather than a second aggregation on the whole dataset.
Something like this (untested code, but based on a working part of one of my projects):
.query {rangeQuery("date") gte "01-01-2018" lte "31-12-2018" }
.aggs { termsAgg("s1","product_name").subAggregations(
sumAgg("sums","total_sum")
)
}
The results come back as a bunch of nested Map[String,Any] which take a bit of sorting through, but some logging/print statements and a bit of trial and error sorted it out for me.
Reference is here: https://github.com/guardian/archivehunter/blob/47372d55d458cfe31e5d9809910cc5d9a4bbb9bf/app/controllers/SearchController.scala#L203, in that case I am processing it down for rendering in a browser frontend with ChartJS.
Apologies for brevity, but I'm on the hop at the moment and haven't got long to post :) | unknown | |
d16804 | val | VirtualEnv is designed to handle this kind of case.
virtualenv is a tool to create isolated Python environments.
Using virtualenv, you will be able to create 2 environments, one with the sip.pyd in version 8.x and another in version 6.0.
A: Assuming you don't have a piece of code needing both files at once, I'd recommend the following:
*
*install both files in 2 separate directories (call them e.g. sip-6.0 and sip-8.0), that you'll place in site-packages/
*write a sip_helper.py file with code looking like
sip_helper.py contents:
import sys
import re
from os.path import join, dirname
def install_sip(version='6.0'):
assert version in ('6.0', '8.0'), "unsupported version"
keep = []
if 'sip' in sys.modules:
del sys.modules['sip']
for path in sys.path:
if not re.match(r'.*sip-\d\.\d', path):
keep.append(path)
sys.path[:] = keep # remove other paths
sys.path.append(join(dirname(__file__), 'sip-%s' % version))
*
*put sip_helper.py in site_packages (the parent directory of the sip-6.0 and sip-8.0 directories)
*call sip_helper.install_sip at the startup of your programs
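For example, a minimal usage sketch of that startup call (assuming you need the 8.0 build):
import sip_helper
sip_helper.install_sip(version='8.0')
import sip  # now resolves to the copy in the sip-8.0 directory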
A: I don't know if that works (if a module's name has to match its contents), but can't you just rename them to sip6.pyd resp. sip8.pyd and then do
if need6:
import sip6 as sip
else:
import sip8 as sip
? | unknown | |
d16805 | val | This one is more tricky. I don't see any reference to a string, so the TypeError is really weird.
Nonetheless, you may patch the generated .cxx file with this gist https://gist.github.com/phurni/5081001 . As you see it simply adds a bunch of printf() to trace the call to #initialize. You may follow this pattern to track it down, maybe editing your question with some more info, or the updated irb session (showing the trace).
UPDATE
To make it short, it seems that the Qxt lib you generate and the Qt ruby lib you use are not generated by the same version of SWIG. This wouldn't be a problem for separated libs, but because your Qxt lib will interop with the Qt lib (you pass the ui argument which is a Qt wrapped object to your own Qxt wrapped object), both MUST be wrapped by the same version (at least the minor?) of SWIG.
Back to technical detail:
The exception raised comes from the call of SWIG_ConvertPtr on line 1984, which in turn calls SWIG_Ruby_MangleStr. This function tries to get an instance variable @__swigtype__ on the passed argument, which is ui in your code. This is to be able to type check (on the C++ side) the passed argument. It seems that this variable is nil (because it comes from Qt wrapped differently, without using such a variable), and the code in SWIG_Ruby_MangleStr WANTS to convert it to a String.
Conclusion:
I don't know a way to determine which version of SWIG wrapped an existing lib; if you find one, you may get the one that wrapped the Qt lib and use that version to wrap your Qxt lib.
The other way is to generate the Qt libs with a known version of SWIG and do the same for your Qxt lib. | unknown | |
d16806 | val | According to your input, the format string in the fscanf is wrong.
You use the following string:
%c %d %d %d %c %d %d %d %d %d %d
This is an example input line:
25 55 22 N 123 213 123 S 25 23 2
You need the following string:
%d %d %d %[^A-Z] %c %d %d %d %[^A-Z] %c %d %d %d
Based on the given example, you read 25, 55, 22 using %d.
Then you skip spaces etc. using %[^A-Z], and then you read N or S using %c.
A space is also a character; you need to pay attention to it. | unknown | |
d16807 | val | Try calling setRetainInstance(true) after super.onActivityCreated(savedInstanceState);
here is the documentation of setRetainInstance(boolean)
Activity :
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main2);
FragmentManager manager = getSupportFragmentManager();
Fragment myFragment = manager.findFragmentByTag("myFragment");
if (myFragment == null) {
myFragment = new ParentFragment();
}
manager.beginTransaction().add(R.id.fragmentHolder, myFragment, "myFragment").commit();
}
}
Fragment:
public class ParentFragment extends Fragment{
@Override
public void onActivityCreated(@Nullable Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
setRetainInstance(true);
}
}
A: I am just giving you this information.
You can check the orientation of the device this way and set its layout.
@Override
public void onConfigurationChanged(Configuration newConfig) {
super.onConfigurationChanged(newConfig);
if (newConfig.orientation==Configuration.ORIENTATION_LANDSCAPE){
setContentView(R.layout.table_horizontal);
button1 = (Button)findViewById(R.id.button1);
button1.setOnClickListener(this);
}else if (newConfig.orientation==Configuration.ORIENTATION_PORTRAIT){
setContentView(R.layout.table_vertical);
button1 = (Button)findViewById(R.id.button1);
button1.setOnClickListener(this);
}
} | unknown | |
d16808 | val | private void language1ActionPerformed(java.awt.event.ActionEvent evt) {
RSyntaxTextArea syntaxTextArea = new RSyntaxTextArea(6, 20);
String lang = (String) language1.getSelectedItem();
syntaxTextArea.setSyntaxEditingStyle(SyntaxConstants.SYNTAX_STYLE_JAVA);
msg1 = syntaxTextArea;
msg1.setVisible(true);
The above code does nothing. You haven't actually added the syntax area component to the frame.
Don't create a new RSyntaxArea object!
Instead you need to make the syntaxTextArea an instance variable in your class then you can reference the variable from the listener and just change the editing style property.
Or if for some reason the class doesn't allow you to dynamically change the property, then you will need to use:
sp.setViewportView(syntaxTextArea);
in your listener code, in which case the scrollpane variable will now need to be an instance variable in your class.
Either way you will need to create an instance variable that can be referenced from your ActionListener so you will need to restructure your code. Read the Swing tutorial on How to Use Text Areas. The demo code there will show you how to better structure your classes. | unknown | |
d16809 | val | .then(response => {
const fetchedData = [];
for(let key in response.data){
fetchedData.push({
...response.data[key],
id: response.data[key].name //made a small change here
});
}
dispatch(FetchPostSuccess(fetchedData));
}) | unknown | |
d16810 | val | The solution is to add the Domain/Enterprise Administrator groups to the local Administrators group, not to delete it. | unknown | |
d16811 | val | The documentation of the Python C-API states:
Note Since Python may define some pre-processor definitions which affect the standard headers on some systems, you must include Python.h before any standard headers are included.
It is very likely that some of the Qt headers include standard headers (as evident from the error you get, it does include /usr/include/features.h, for example), therefore #include <Python.h> should be placed before the Qt headers. In fact, it should generally be placed before any other include-statement.
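For example, a minimal sketch of that ordering (the Qt header shown is just an illustration):
#include <Python.h>        // must come first, before any standard or Qt headers

#include <QtCore/QString>  // Qt headers afterwards
#include <vector>          // other standard headers afterwards as well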
Note that this is the case with Python 2.7, too. If a different include order works for you with Python 2.7, then you are simply lucky. | unknown | |
d16812 | val | Let's take a basic program:
class Program
{
static void Main(string[] args)
{
Foo();
}
public static void Foo(int i = 5)
{
Console.WriteLine("hi" +i);
}
}
And look at some IL Code.
For Foo:
.method public hidebysig static void Foo([opt] int32 i) cil managed
{
.param [1] = int32(0x00000005)
// Code size 24 (0x18)
.maxstack 8
IL_0000: nop
IL_0001: ldstr "hi"
IL_0006: ldarg.0
IL_0007: box [mscorlib]System.Int32
IL_000c: call string [mscorlib]System.String::Concat(object,
object)
IL_0011: call void [mscorlib]System.Console::WriteLine(string)
IL_0016: nop
IL_0017: ret
} // end of method Program::Foo
For Main:
.method private hidebysig static void Main(string[] args) cil managed
{
.entrypoint
// Code size 9 (0x9)
.maxstack 8
IL_0000: nop
IL_0001: ldc.i4.5
IL_0002: call void ConsoleApplication3.Program::Foo(int32)
IL_0007: nop
IL_0008: ret
} // end of method Program::Main
Notice that Main has 5 hardcoded as part of the call, and Foo declares it in its metadata. The calling method is actually hardcoding the optional value! The value appears at both the call site and the callee site.
You will be able to get at the optional value by using the form:
typeof(SomeClass).GetConstructor(new []{typeof(string),typeof(int),typeof(int)})
.GetParameters()[1].RawDefaultValue
On MSDN for DefaultValue (mentioned in the other answer):
This property is used only in the execution context. In the reflection-only context, use the RawDefaultValue property instead. MSDN
And finally a POC:
static void Main(string[] args)
{
var optionalParameterInformation = typeof(SomeClass).GetConstructor(new[] { typeof(string), typeof(int), typeof(int) })
.GetParameters().Select(p => new {p.Name, OptionalValue = p.RawDefaultValue});
foreach (var p in optionalParameterInformation)
Console.WriteLine(p.Name+":"+p.OptionalValue);
Console.ReadKey();
}
http://bartdesmet.net/blogs/bart/archive/2008/10/31/c-4-0-feature-focus-part-1-optional-parameters.aspx
A: DefaultValue of the ParameterInfo class is what you are looking for:
var defaultValues = typeof(SomeClass).GetConstructors()[0].GetParameters().Select(t => t.DefaultValue).ToList(); | unknown | |
d16813 | val | Do you mean you want to implement something yourself like a single server which controls the locks for each script?
All your other servers would have to ask it for 'permission' to run the script and then inform it when they are done, probably with some timeout check mechanism also. You would need to think about having some high availability mechanism to ensure your 'lock controller' server does not become a single point of failure for the entire system. Also, you may want to check if you will need to queue requests rather than just exiting - even if it is not a requirement now, if it is likely to become one it might be easier to design for it from the start.
Some common approaches are listed in the answers to these question here - the questions are a bit old but I think still relevant:
Distributed Lock Service
What are some good ways to do intermachine locking? | unknown | |
d16814 | val | You can't say when the thread starts to run; it might not start until after you return from main, which means the process will end and the thread with it.
You have to wait for the thread to finish, with pthread_join, before leaving main.
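For instance, a minimal sketch of that pattern (error handling omitted; the thread function is a placeholder modelled on your output):
#include <pthread.h>
#include <stdio.h>

void *worker(void *arg)              /* placeholder thread function */
{
    printf("hello from the thread\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    printf("HERE\n");
    pthread_join(tid, NULL);         /* wait for the thread before main returns */
    return 0;
}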
The third case, with the message from the thread printed twice, might be because the thread executes, and the buffer is written to stdout as part of the end-of-line flush, but then the thread is preempted before the flush is finished, and then the process exits, which means all file streams (like stdout) are flushed, so the text is printed again.
A: For output 1:
your main function only creates a pthread and lets it run without waiting for it to finish.
When your main function returns, the operating system will reclaim all the resources assigned to the process. However, the newly created pthread might not have run.
That is why you only got HERE.
For output 2:
your newly created thread finished before the main function returned. Therefore you can see both the main thread's and the created thread's output.
For output 3
This should be a bug in glibc. Please refer to Unexpected output in a multithreaded program for details.
To make the program always produce the same output, pthread_join is needed after pthread_create. | unknown | |
d16815 | val | It looks like you're mixing two different mechanisms?
One is of a parent -> child component relationship where you have your WorkDetailsComponent with an @Input() for prova, but at the same time, it looks like the component is its own page given your <a [routerLink]="['/details',item.id]"> and the usage of this.route.paramMap.subscribe....
Fairly certain you can't have it both ways.
You either go parent -> child component wherein you pass in the relevant details using the @Input()s:
<div *ngIf="selectedItem">
<app-work-details [prova]="prova" [selectedItem]="selectedItem"></app-work-details>
</div>
OR you go with the separate page route which can be done one of two ways:
*
*Use the service as a shared service; have it remember the state (prova) so that when the details page loads, it can request the data for the relevant id.
*Pass the additional data through the route params
For 1, it would look something like:
private prova: results[]; // Save request response to this as well
public getItem(id: number): results {
return this.prova.find(x => x.id === id);
}
And then when you load your details page:
ngOnInit():void {
this.route.paramMap.subscribe
(params => {
this.selectedId=+params.get('selectedId');
this.item = this.service.getItem(this.selectedId);
});
}
For 2, it involves routing with additional data, something like this:
<a [routerLink]="['/details',item.id]" [state]="{ data: {prova}}">..</a>
This article shows the various ways of getting data between components in better detail. | unknown | |
d16816 | val | Looking at the documentation "~all" is specifically mentioned as valid parameter value for listManagementProfiles:
Account ID for the view (profiles) to retrieve. Can either be a
specific account ID or '~all', which refers to all the accounts to
which the user has access.
but not for listManagementCustomDimensions; here it says simply
Account ID for the custom dimensions to retrieve.
(same for property id). So your problem is quite literally what the error message says, you cannot use "~all" when querying custom dimensions.
So it seems that to list all custom dimensions you'd have to iterate through a list of property ids (as returned by the properties/list method) instead of using "~all". | unknown | |
d16817 | val | You'd want the swizzle to happen as soon as the libraries are loaded. You can do that via +initialize, +load, or a constructor function.
@bbum's answer to this question has a bit more information, along with one of his blog posts on the caveats of using these special class methods.
(And I'm purposely not questioning the wisdom of what you're doing ;) )
A: You can use constructor functions like this:
__attribute__((constructor)) static void do_the_swizzles()
{
// Do all your swizzling here.
}
From GCC documentation:
The constructor attribute causes the function to be called
automatically before execution enters main().
Note: Although this is originally from GCC, it also works in LLVM. | unknown | |
d16818 | val | As per Getting Started with Headless Chrome, to enable remote debugging you can add the argument remote-debugging-port through Selenium::WebDriver::Chrome::Options.new, which will help in:
Navigating to http://localhost:9222 in another browser to open the DevTools interface or use a tool such as Selenium to drive the headless browser.
options = Selenium::WebDriver::Chrome::Options.new
options.add_argument('--headless')
options.add_argument('--disable-gpu')
options.add_argument('--remote-debugging-port=9222')
@driver = Selenium::WebDriver.for(:chrome, options: options)
puts @driver.manage.logs.get :browser | unknown | |
d16819 | val | It is nothing mysterious, it is the batch dimension, since keras (and most DL frameworks), make computations on batches of data at a time, since this increases parallelism, and it maps directly to batches in Stochastic Gradient Descent.
Your layer needs to support computation on batches, so the batch dimension is always present in input and output data, and it is automatically added by keras to the input_shape.
A: This newly added dimension refers to the batch dimension, i.e., in your case, you will be passing batches of 3x3 dimensional tensors. This additional None dimension refers to the batch dimension, which is unknown during the creation of the graph.
If you have a look at the Input layer explanation in Core Layers webpage for Keras,
https://keras.io/layers/core/, you will see that the shape argument you are passing when creating the Input layer is defined as the following:
shape: A shape tuple (integer), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors.
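A quick way to see this for yourself (a minimal sketch, assuming a TensorFlow 2.x Keras installation):
from tensorflow.keras import Input

x = Input(shape=(3, 3))
print(x.shape)  # (None, 3, 3) -- the leading None is the batch dimension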
A: If you want to specify the batch size, you can do this instead:
inputs = Input(batch_shape=(batch_size,height,width,channel)) | unknown | |
d16820 | val | Because...myApp is not defined. Add a var in front of it to declare it. Otherwise, you're relying on The Horror of Implicit Globals (which only "works" in loose mode anyway), so JSHint is quite rightly telling you not to do that.
A: The error describes the problem perfectly. myApp is not defined. Define it with
var myApp
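For example (a minimal sketch, assuming the module name autoApp used below):
var myApp = angular.module('autoApp', []);     // declared with var, so no implicit global
myApp.controller('MainCtrl', function () {});  // hypothetical controller registration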
A: First thing is that you need to use var myApp there instead of just myApp.
Secondly, you don't really need that variable at all.
Even if you use var myApp, a global variable would be created. To avoid it, use angular.module('autoApp') (no second argument) wherever you want a reference to myApp. | unknown | |
d16821 | val | No, logback doesn't support xz. You can see it in the source for RollingPolicyBase.java and the source for Compressor.java | unknown | |
d16822 | val | Your approach is deeply flawed. Every time you load your JSP, it creates another Jade container. When using Jade, you generally only want to have a single container on a single machine.
A better approach would be to start up your container when your entire application starts up. Then, when you want to trigger the Jade GUI, you should just start the RMA agent inside your Jade container, as described in this answer: https://stackoverflow.com/a/20462656/2047962
Now, let's take a step back and talk about your code. I would strongly recommend that you stop using scriptlets in your JSPs RIGHT NOW. It's a bad practice that leads to really poor designs. If you don't know the alternatives, please read "How to avoid Java code in JSP files?".
Second, why are you mixing web applications and desktop Swing UIs? Why are you opening a Swing GUI when someone accesses your web page in a browser? If you were to deploy that application, your clients would get to a page that says "JADE is running", but they wouldn't see anything. The GUI would open on the server, not the client. I can't see why you would want this functionality in a web environment. | unknown | |
d16823 | val | Problem solved after downgrading the NuGet packages Xamarin.Firebase.Messaging and Xamarin.GooglePlayServices.Base to version 42.1001.0.
A: My solution was the other way round, I did not have Xamarin.Firebase.Messaging installed at first. But after installing it from Nuget, it works. | unknown | |
d16824 | val | This works:
<input type="text" id="mysource">
<input type="text" id="yoursource">
<input type="text" id="nope">
js (the context 'this' inside of the each function will be the matched elements):
$("input[type='text'][id$='source']").each(function() {
console.log(this);
});
console result:
<input type="text" id="mysource">
<input type="text" id="yoursource">
A: Clean up your quotes and you should be OK -
$('input[id$="Source"]').each(function() {
$('input[type="text"][id$="Source"]').each(function() {
Here is an example fiddle - http://jsfiddle.net/C5v4H/ | unknown | |
d16825 | val | So, I figured it out.
TLDR: It was caused by the VSTS release task called "Azure App Service Deploy", which changes the variables to strings in the "File Transform & Variable Substitution Option" section.
After looking again at the deployed file I finally saw that the file, which is:
{
"isTestEnvironment": true,
}
in my sources, had become:
{
"isTestEnvironment": "true",
}
on the azure server.
This made every check we made against this supposedly boolean variable bogus, since a non-empty string is truthy in JS.
So I'm now using JSON.parse(myVariable) when I extract it from the env.json file.
I could reproduce it in local and tested it. It works.
To anyone using the same release task as we do, be careful that booleans will be changed into strings. I checked our release pipeline variables and they are NOT declared with quotes around them; plain simple true/false values.
In any case, thank you guys for giving potential workarounds. | unknown | |
d16826 | val | Join the CROSS join of categories to sub_categories, so you get all the combinations of categories and subcategories, to the other 3 tables with LEFT joins and group by each combination and aggregate:
select c.category_name, sc.sub_category_name,
count(distinct p.podcast_id) podcast_count,
count(distinct v.video_id) videos_count,
count(distinct o.other_link_id) other_count
from categories c cross join sub_categories sc
left join podcasts p on (p.podcast_category_id, p.podcast_subcategory_id) = (c.category_id, sc.sub_category_id)
and p.podcast_owner = 14 AND p.podcast_upload_time_stamp >= timestamp '2020-10-22 00:00:00'
left join videos v on (v.video_category_id, v.video_subcategory_id) = (c.category_id, sc.sub_category_id)
and v.video_owner = 14 AND v.video_upload_time_stamp >= timestamp '2020-10-22 00:00:00'
left join otherlinks o on (o.other_link_category_id, o.other_link_subcategory_id) = (c.category_id, sc.sub_category_id)
and o.other_link_owner = 14 AND o.other_link_add_time_stamp >= timestamp '2020-10-22 00:00:00'
where coalesce(p.podcast_id, v.video_id, o.other_link_id) is not null
group by c.category_id, c.category_name, sc.sub_category_id, sc.sub_category_name
The WHERE clause filters out any combination of category and subcategory that does not contain any podcast, video or other link.
A: You can use UNION ALL and add a constant to distinguish them.
SELECT 'podcasts' as "rowtype", c.category_name, sc.sub_category_name, count(p.*) AS type_count
FROM podcasts p
JOIN categories c ON c.category_id = p.podcast_category_id
JOIN sub_categories sc ON sc.sub_category_id = p.podcast_subcategory_id
WHERE p.podcast_owner = 14 AND p.podcast_upload_time_stamp >= timestamp '2020-10-22 00:00:00'
GROUP BY 1, 2, 3
UNION ALL
SELECT 'videos' as "rowtype", c.category_name, sc.sub_category_name, count(o.*) AS type_count
FROM videolinks o
JOIN categories c ON c.category_id = o.other_link_category_id
JOIN sub_categories sc ON sc.sub_category_id = o.other_link_subcategory_id
WHERE o.other_link_owner = 14 AND o.other_link_add_time_stamp >= timestamp '2020-10-22 00:00:00'
GROUP BY 1, 2, 3
UNION ALL
SELECT 'other' as "rowtype", c.category_name, sc.sub_category_name, count(o.*) AS type_count
FROM otherlinks o
JOIN categories c ON c.category_id = o.other_link_category_id
JOIN sub_categories sc ON sc.sub_category_id = o.other_link_subcategory_id
WHERE o.other_link_owner = 14 AND o.other_link_add_time_stamp >= timestamp '2020-10-22 00:00:00'
GROUP BY 1, 2, 3
I want to note that even just union might work, but union all is more often than not the one that gives the desired result. | unknown | |
d16827 | val | You can just use the FirePath plugin that mozilla has.
Or if not then for the following code
<html>
<body>
<p>
<span>Welcome</span>
Hello welcome.
</p>
<p>
<span>Welcome!!!</span>
Good morning
</p>
</body>
</html>
The xpath for second p element will be : html/body/p[2]
For the span within it: html/body/p[2]/span
A: Use the // (double slash) to find all the elements present in the DOM that start with the p tag.
//p will return all the elements present in the DOM with the p tag.
Hope it will help you :) | unknown | |
d16828 | val | The command you are looking for is hilomap. For example, to map to TIEHI and TIELO cells with Y outputs use something like:
hilomap -hicell TIEHI Y -locell TIELO Y
This will create an individual TIEHI/TIELO cell for each constant bit in the design. Use the option -singleton to only create single TIEHI/TIELO cells with a higher fan-out. | unknown | |
d16829 | val | I fixed the issue like this.
columnDefs: [
{
"targets": [],
"render": function(data, type, row) {
row[5] = row[2] * $(row[3]).val();
return row[5];
}
}]
what I did previously was like:
columnDefs: [
{
"targets": [],
"render": function(data, type, row) {
value = row[2] * $(row[3]).val();
return value;
}
}]
Here a new value is returned and it will be visible in the table, but the row data won't be updated. If we update the row[] array, we will get the update in row().data(). | unknown | |
d16830 | val | You'll need to create separate folders for strings.xml containing the different languages. You can see/read this in the How to Create Alternative Resources guide:
*
*res/values/strings.xml Contains English text for all the strings that the application uses, including text for a string named title.
*res/values-fr/strings.xml Contain French text for all the strings, including title.
*res/values-ja/strings.xml Contain Japanese text for all the strings except title.
If your Java code refers to R.string.title, here is what will happen
at runtime:
If the device is set to any language other than French, Android will
load title from the res/values/strings.xml file. If the device is set
to French, Android will load title from the res/values-fr/strings.xml
file.
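For example, a minimal sketch of such a lookup inside an Activity (the resource name follows the quoted docs):
String title = getString(R.string.title); // Android resolves the right values-* folder at runtime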
Check this Multilingual app tutorial for a hands-on experience. | unknown | |
d16831 | val | It is already in the results since you are using a * in the query.
echo $row['fldFName'] . ' ' . $row['fldSName'] . '<br />'; | unknown | |
d16832 | val | According to the Add-on manifest documentation, the mentioned manifest field is meant to store your add-on name.
name: Required. The name of the add-on shown in the toolbar.
Since this field is meant for the name of your add-on, it sounds like expected behavior not being able to update it on runtime.
Here is an open Feature Request ticket with Google to add support for this functionality. | unknown | |
d16833 | val | Just a note for next time - because you didn't detail the problem, it was hard to figure out what you mean. Anyhow:
in order to do what you asked you need to:
a) read the data from the file
b) split the data based on the character which is between the cells.
In C++, the split-string algorithm is in Boost - if you don't know what that is, make sure you take a look here: http://www.boost.org/
Solution:
I'm modifying various C++ guides here to fit your purpose:
#include <sstream>
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <boost/algorithm/string/split.hpp>
#include <boost/algorithm/string/classification.hpp>
using namespace std;
vector<string> getData (string filePath) {
    vector<string> Cells;         // In the end, here we will store each cell's content.
    stringstream fileContent(""); // This is a string stream, which will store the database as a string.
    ifstream myfile;              // the file which the database is in (ifstream, since we are reading)
    myfile.open (filePath);       // Opening the file
    string line;
    while ( getline (myfile,line) ) // Reading it until it's over
    {
        fileContent << line << ' '; // adding each line to the string, keeping a separator between lines
    }
    boost::split(Cells, fileContent.str(), boost::is_any_of(" ")); // Here, insert the char which separates the cells from each other.
    myfile.close();
    return Cells; // returning the split string.
}
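A minimal usage sketch (the file name here is just an example):
int main()
{
    std::vector<std::string> cells = getData("database.txt");
    for (const std::string& cell : cells)
        std::cout << cell << '\n';
}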
Hope I helped :) | unknown | |
d16834 | val | Option 1: Use AngularJS run
// Calling the rootScope to handle the ondevice ready
angular.module('myApp').run(['$rootScope', function($rootScope) {
document.addEventListener('deviceready', function() {
$rootScope.$apply(function() {
$rootScope.myVariable = "variable value";
try {
$rootScope.uuid = device.uuid; //always use device object after deviceready.**
alert($rootScope.uuid);
//onapploginx(uuid);
} catch (e) {
alert(e);
}
// Register the event listener
document.addEventListener("backbutton", onBackKeyDown, false);
});
});
}]);
Option 2: Use JS only
define the onLoad in the body
<body id="main_body" ng-app='myApp' ng-controller='DemoController' onload="onLoad()">
call addEventListener for device ready and then call the function of onDeviceReady
function onLoad() {
console.log("i am onload");
document.addEventListener("deviceready", onDeviceReady, false);
}
// device APIs are available
//
function onDeviceReady() {
try {
var uuid = device.uuid; //always use device object after deviceready.**
alert("uuidx:",uuid);
} catch (e) {
alert(e);
}
} | unknown | |
d16835 | val | There are some solutions:
*
*npm install [email protected]
*npm audit fix (this one requires you to have a package.json file [get that by npm init])
tell me if one works
OR
list the problems that you face
A: Now when I am using npm install redux everything looks fine, but this is my package.json and I don't have 'redux' in dependencies. When I want to use import { createStore } from 'redux';
my WebStorm doesn't see the redux package.
{
"name": "my-app",
"version": "0.1.0",
"private": true,
"dependencies": {
"react": "^16.13.1",
"react-dom": "^16.13.1",
"react-redux": "^7.2.1",
"react-scripts": "0.9.5"
},
"devDependencies": {
"gulp": "^4.0.2"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test --env=jsdom",
"eject": "react-scripts eject"
},
"description": "This project was bootstrapped with [Create React App](https://github.com/facebookincubator/create-react-app).",
"main": "index.js",
"author": "",
"license": "ISC"
} | unknown | |
d16836 | val | The easiest approach is to loop through the file by lines. Something like this:
package main
import (
"bufio"
"fmt"
"log"
"strconv"
"strings"
)
type Student struct {
FirstName string
LastName string
}
func main() {
fmt.Println("What is the name of your file?\n") var filename string
fmt.Scan(&filename)
file, err := os.Open(filename)
if err != nil {
log.Fatal(err)
}
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
if len(line) == 0 {
// skip blank lines
continue
}
if '0' <= line[0] && line[0] <= '9' {
sum := 0
for _, field := range strings.Fields(line) {
n, err := strconv.Atoi(field)
if err != nil {
log.Fatal(err)
}
sum += n
}
fmt.Println(sum)
} else {
fields := strings.Fields(line)
if len(fields) != 2 {
log.Fatal("don't know how to get first name last name")
}
fmt.Println("First:", fields[0], "Last:", fields[1])
}
}
if err := scanner.Err(); err != nil {
log.Fatal(err)
}
}
See it on the playground. | unknown | |
d16837 | val | You have to write a custom initializer to handle the cases, for example
struct Thing : Decodable {
let scanCode, name, scanId : String
private enum CodingKeys: String, CodingKey { case scanCode = "ScanCode", name = "Name", ScanID, Id }
init(from decoder: Decoder) throws {
let container = try decoder.container(keyedBy: CodingKeys.self)
scanCode = try container.decode(String.self, forKey: .scanCode)
name = try container.decode(String.self, forKey: .name)
if let id = try container.decodeIfPresent(String.self, forKey: .Id) {
scanId = id
} else {
scanId = try container.decode(String.self, forKey: .ScanID)
}
}
}
First try to decode one key; if it fails, decode the other.
For convenience, I skipped the attributes key. | unknown | |
d16838 | val | Try the python csv library
import csv
with open('myfile_0.csv', 'rb') as csvfile:
reader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in reader:
print ', '.join(row)
# output:
# Time, InfD, Com, ComN
# 0, 3, 4, 0
# 1, 2, 5, 1 | unknown | |
d16839 | val | To run the module as script directly use:
python3 -m partition
(without the .py ending).
That will cause Python to search sys.path for a module called partition and execute it. Using partition.py as the module name would instead mean a module py inside a package partition, i.e. the file partition/py.py.
A: See the doc, specifically that the module must be on the path, and the extension shouldn't be included. | unknown | |
d16840 | val | If you didn't preload the images, you could see this kind of problem.
To preload the images, just add a special div with the wanted urls.
<div id="preload">
<img src="/path/to/my/image.png" alt="">
<img src="/img2.gif" alt="">
</div>
In your css:
#preload { display:none; }
That's it ! | unknown | |
d16841 | val | The symbol for your enum concept can be the string identifier in your restDb. Here's one pattern:
Modify your existing enum to follow this format
enum (AltBrainsNames) {
description (AltBrainsNames Identifiers)
symbol (insideTheHelmut)
symbol (impeachmentSage)
symbol (iranConflictTracker)
symbol (historicalCarbonDioxideEmissions)
symbol (picard)
symbol (quotationBank)
symbol (USElections)
}
Your tuple to connect the user-friendly name to the identifier.
structure (NameSelection) {
property (name) {
type (AltBrainsNames)
min (Required) max (One)
}
property (title) {
type (core.Text)
min (Required) max (One)
visibility (Private)
}
}
Get a list of names
action (GetAltBrainsNames) {
type(Constructor)
output (NameSelection)
}
Provide a list of names, and prompt the user to select one
action (MakeNameSelection) {
type(Calculation)
collect {
input (selection) {
type (NameSelection)
min (Required) max (One)
default-init {
intent {
goal: GetAltBrainsNames
}
}
}
}
output (AltBrainsNames)
}
Your vocabulary can support the user saying synonyms for symbol
vocab (AltBrainsNames) {
"insideTheHelmut" { "insideTheHelmut" "inside the helmut" "helmut"}
"impeachmentSage" { "impeachmentSage" "impeachment sage" "impeachment" "sage"}
"iranConflictTracker" {"iranConflictTracker" "iran conflict tracker"}
"historicalCarbonDioxideEmissions" { "historicalCarbonDioxideEmissions" "historical carbon dioxide emissions"}
"picard" { "picard"}
"quotationBank" {"quotationBank" "quotation bank" "quotations"}
"USElections" {"USElections" "us elections" }
} | unknown | |
d16842 | val | You could add a parameter for the BundleContext to your interface methods. Then, when the client code calls into your service, passing in its bundle context, you can call context.getBundle().getSymbolicName() or other methods to get information about the bundle from which the call came.
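For example, a minimal sketch of that approach (the service interface and method names are made up for illustration):
import org.osgi.framework.BundleContext;

public class GreetingServiceImpl implements GreetingService {
    @Override
    public void greet(BundleContext callerContext) {
        // The caller passes in its own BundleContext, so we can see which bundle invoked us.
        String caller = callerContext.getBundle().getSymbolicName();
        System.out.println("Called from bundle: " + caller);
    }
}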
A: The correct way to do this is to use a ServiceFactory, as explained in the OSGi specification. If you register your service as a service factory, you can supply an implementation for each "client" (where "client" is defined as bundle, invoking your service). This allows you to know who is invoking you, without the client having to specify anything as it's clearly not good design to add a parameter called BundleContext (unless there is no other way).
Some "pseudo" code:
class Bundle_C_Activator implements BundleActivator {
public void start(BundleContext c) {
c.registerService(ServiceInterface.class.getName(),
new ServiceFactory() {
Object getService(Bundle b, ServiceRegistration r) {
return new ServiceImpl(b); // <- here you hold on to the invoking bundle
}
public void ungetService(Bundle b, ServiceRegistration r, Object s) {}
}, null);
}
}
class ServiceImpl implements ServiceInterface {
ServiceImpl(Bundle b) {
this.b = b; // <- so we know who is invoking us later
}
// proceed here with the implementation...
} | unknown | |
d16843 | val | TL;DR:
*
*Controller == Works on vanilla K8s resources
*Operator == a Controller that adds custom resources (CRDs) required for it's operation
Change my mind, but in my opinion the difference is negligible and the terms rather confuse people than actually add value to a discussion. I would therefore use them interchangeably.
A: In Kubernetes, most of the operations happen in an asynchronous manner.
For instance, when one creates a ReplicaSet object (picking a simpler object), this is the sequence that happens:
*
*We send the request to the Kube api-server.
*The kube-api server has a complex validation
*
*Ensures that the user has the RBAC credential to create the RS in the given namespace
*The request is validated by all the configured admission controllers
*Finally the object is just written to ETCD - nothing more nothing less
Now, it is the responsibility of the various Kubernetes controllers to watch the ETCD changes and actually execute the necessary operations. In this case, the ReplicaSet controller would be watching for the changes in ETCD (e.g. CRUD of ReplicataSets) and would create the Pods as per the replica count etc.
Now, coming to Operators, conceptually they are very similar to Kubernetes controllers. But they are used with third-party entities. In Kubernetes, there is a concept of CRDs, where vendors can define their own CRD which is nothing but a custom (e.g. Vendor specific) kubernetes object type. Very similar to the manner in which Kubernetes controllers read to the CRUD of Kubernetes objects, these operators respond to the operations on the corresponding CRDs. E.g. Kong operator can create new API entries in the Kong API server when a new API CRD object is created in the Kubernetes cluster.
A: I believe the term "kubernetes operator" was introduced by the CoreOS people here
An Operator is an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts, but also includes domain or application-specific knowledge to automate common tasks better managed by computers.
So basically, a kubernetes operator is the name of a pattern that consists of a kubernetes controller that adds new objects to the Kubernetes API, in order to configure and manage an application, such as Prometheus or etcd.
In one sentence: An operator is a domain specific controller.
Update
There is a new discussion on Github about this very same topic, linking to the same blog post. Relevant bits of the discussion are:
All Operators use the controller pattern, but not all controllers are Operators. It's only an Operator if it's got: controller pattern + API extension + single-app focus.
Operator is a customized controller implemented with CRD. It follows the same pattern as built-in controllers (i.e. watch, diff, action).
Update 2
I found a new blog post that tries to explain the difference as well.
A: Controllers are objects innate to Kubernetes that follow the control-loop theory and ensure the desired state matches the actual state. ReplicaSet, DaemonSet, replication - they are all pre-configured/pre-installed controllers.
Operators also have controllers. Operators are a means to customize or extend the functionality of Kubernetes by means of CRDs (Custom Resource Definitions). For example, if you need to auto-inject a specialized monitoring or initialization container when a new app pod is created, then you will need to write some customization (operators), as this functionality is not available in Kubernetes.
Operators can be written in any language with the ability to communicate with the Kubernetes API server; I have mostly seen them written in Golang. | unknown | |
d16844 | val | Whilst I think the use of nameof is clever, I can think of a few scenarios where it might cause you a problem (not all of these might apply to you):
1/ There are some string values for which you can't have the name and value the same. Any string value starting with a number for example can't be used as a name of a constant. So you will have exceptions where you can't use nameof.
2/ Depending how these values are used (for example if they are names of values stored in a database, in an xml file, etc), then you aren't at liberty to change the values - which is fine until you come to refactor. If you want to rename a constant to make it more readable (or correct the previous developer's spelling mistake) then you can't change it if you are using nameof.
3/ For other developers who have to maintain your code, consider which is more readable:
public const string ConstantA = nameof(ContantA);
or
public const string ConstantA = "ConstantA";
Personally I think it is the latter. In my opinion if you go the nameof route then that might give other developers cause to stop and wonder why you did it that way. It is also implying that it is the name of the constant that is important, whereas if your usage scenario is anything like mine then it is the value that is important and the name is for convenience.
If you accept that there are times when you couldn't use nameof, then is there any real benefit in using it at all? I don't see any disadvantages aside from the above. Personally I would advocate sticking to traditional hard coded string constants.
That all said, if your objective is simply to ensure that you are not using the same string value more than once, then (because this will give you a compiler error if two names are the same) this would be a very effective solution.
A: I think nameof() has 2 advantages over literal strings:
1.) When the name changes, you will get compiler errors unless you change all occurences. So this is less error-prone.
2.) When quickly trying to understand code you didn't write yourself, you can clearly distinguish which context the name comes from. Example:
ViewModel1.PropertyChanged += OnPropertyChanged; // add the event handler in line 50
...
void OnPropertyChanged(object sender, string propertyName) // event handler in line 600
{
if (propertyName == nameof(ViewModel1.Color))
{
// no need to scroll up to line 50 in order to see
// that we're dealing with ViewModel1's properties
...
}
}
A: Using the nameof() operator with public constant strings is risky. As its name suggests, the value of a public constant should really be constant/permanent. If you have public constant declared with the nameof() and if you rename it later then you may break your client code using the constant. In his book Essential C# 4.0, Mark Michaelis points out: (Emphasis is mine)
public constants should be permanent because changing their value will
not necessarily take effect in the assemblies that use it. If an
assembly references constants from a different assembly, the value of
the constant is compiled directly into the referencing assembly.
Therefore, if the value in the referenced assembly is changed but the
referencing assembly is not recompiled, then the referencing assembly
will still use the original value, not the new value. Values that
could potentially change in the future should be specified as readonly
instead. | unknown | |
d16845 | val | Spring is injecting an instance of TestService with a DAO, but that instance isn't the one that requests are going to. You're using Jersey's ServletContainer to host your Jersey app, which doesn't integrate with Spring in any way. It'll be creating instances as needed all on its own, which obviously won't be injected by Spring (without doing some bytecode weaving, anyway). I'd recommend using the SpringServlet, which is a ServletContainer that knows how to get resource classes from a Spring context. That'll clear up your problem.
A: Same as Ryan pointed out - your ServletContainer servlet doesn't know about Spring container, so your @Resource/@Autowired never gets dependency injected.
Use SpringServlet instead, either by adding it to web.xml, or by adding it in your Spring WebInitializer, not both. See examples below.
Here's code example for web.xml:
<servlet>
<servlet-name>jersey-spring</servlet-name>
<servlet-class>com.sun.jersey.spi.spring.container.servlet.SpringServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>jersey-spring</servlet-name>
<url-pattern>/resources/*</url-pattern>
<load-on-startup>1</load-on-startup>
<init-param>
<param-name>com.sun.jersey.config.property.packages</param-name>
<param-value>phonebook.rest</param-value>
</init-param>
</servlet-mapping>
Here's code example for your custom WebInitializer:
public class PhonebookApplicationWebInitializer implements WebApplicationInitializer {
@Override
public void onStartup(ServletContext container) throws ServletException {
AnnotationConfigWebApplicationContext factory = new AnnotationConfigWebApplicationContext();
// factory.scan("phonebook.configuration");
factory.register(PhonebookConfiguration.class);
ServletRegistration.Dynamic dispatcher = container.addServlet("jersey-spring", new SpringServlet());
dispatcher.setLoadOnStartup(1);
dispatcher.addMapping("/resources/*");
dispatcher.setInitParameter("com.sun.jersey.config.property.packages", "phonebook.rest");
container.addListener(new ContextLoaderListener(factory));
}
}
You can see a nice example of Spring+Jersey integration here:
http://www.mkyong.com/webservices/jax-rs/jersey-spring-integration-example/ | unknown | |
d16846 | val | This is a valid problem. With this markup:
<form>
<div>
Male:
<input type="radio" name="gender" value="male" />
Female:
<input type="radio" name="gender" value="female" /></div>
<br />
<div>
Pregnant:
<input type="radio" name="pregnant" value="yes" />
Not Pregnant:
<input type="radio" name="pregnant" value="no" /></div>
<br />
<input type="submit" />
</form>
and this script
$('[name=gender]').click(function () {
var disabled = $(this).val() === 'male';
$('[name=pregnant]').attr('disabled', disabled);
// UNCOMMENT THIS TO FIX
// if ($('[name=pregnant]').hasClass('error')) {
// $('form').validate().form();
// }
});
$('input').addClass('required');
$('form').validate({
ignore: ":disabled",
errorPlacement: function (error, element) {
error.appendTo(element.parent("div"));
},
submitHandler: function () {
alert('form is ok');
}
});
to reproduce the problem:
*
*click female
*click submit
*error message shows - this is correct
*click male - error message still shows
*click submit - form is valid
So, as @ioan says, you need to try to get rid of this error message as it is confusing; even though the form is valid, the error message is still there.
*
*problem - http://jsfiddle.net/YzCHJ/4/
*a possible solution - http://jsfiddle.net/YzCHJ/5/
A: The .validate() ignore: option works as follows:
ignore, Default: ":hidden"
Elements to ignore when validating, simply filtering them out. jQuery's not-method is
used, therefore everything that is accepted by not() can be passed as this option.
http://docs.jquery.com/Plugins/Validation/validate#toptions
Working Demo: http://jsfiddle.net/us9aE/
HTML:
<form id="myform">
<input type="text" name="field" disabled="disabled" /> <br/>
</form>
jQuery:
$(document).ready(function() {
$('#myform').validate({
ignore: ":disabled"
});
});
If you'd like more specific help, please show your relevant HTML, your jQuery, and optionally create a jsFiddle demo.
A: The disabled fields are ignored during the validation phase BUT if the validation phase comes into action before the field has been disabled, the error message will still be there.
You can try to simply remove the validation message by cleaning the validation errors and re-validate the form. | unknown | |
d16847 | val | I think Rack is the answer to this. You should be able to intercept the request and alter the incoming parameters before the request hits your Rails stack.
Why not change the route to use the correct controller in the first place? | unknown | |
d16848 | val | Have you read the Parse documentation on Cloud Code ?
The first line of your code is only relevant when you are initialising the Parse JavaScript SDK in a web page; you do not need to initialise anything in Parse Cloud Code in the main.js file. Also, you cannot use a query to save/update an object. A query is for searching/finding objects; when you want to save or update an object you need to create a Parse.Object and save that.
So you code should become something like:
Parse.Cloud.afterSave("match_status", function(request) {
var wallet = new Parse.Object('Wallet');
wallet.set("wallet_coines_number", 200);
wallet.set("objectId", "FrbLo6v5ux");
wallet.save();
}); | unknown | |
d16849 | val | You should insert a value for Cust_ID because it is not an identity column and it is the primary key.
Alternatively, you can alter Cust_ID to be an identity column. | unknown | |
d16850 | val | Based on the information you supplied it sound like docType is a good property to use as partition key, since it can be used to avoid cross partition queries. Especially since you state this will be often be used in your queries. With the max size you stated it will also be unlikely to cause you issues as a single partition can contain up to 20GB of data.
One thing to watch out for is hot partitioning. You state that your users partition might be a lot bigger than the others. That can result in one partition doing all of the lifting while the others sit mostly idle, which causes inefficiency in your total throughput.
On the other side, it won't really matter for your use case. Since none of the databases will exceed that 5GB, you'll always stay within a single partition, but it's always good to think about it beforehand, as situations may change and you may end up with a database that does split into partitions.
Lastly I would never use a single partition for all data. It has no benefits. If you have no properties that could serve as partition key then id is the better choice (so a logical partition per document). It won't hit storage limitations and evenly distributes throughput between partitions.
A: I would highly recommend you first take a look at this segment of the Data Modelling & Partitioning presentation by Thomas Weiss, Cosmos DB program manager. In my view it's one of the best resources to understand how to think about partitioning.
I do agree with David Makogon that you didn't provide enough data. For instance, we know there are 30 doc types per single database - given that a Cosmos DB database uses containers, I actually expect each docType to have its own container - contrary to what you wrote:
Would it make more sense to use customerId (so all data is in one partition) or docType as a partition key?
Which suggests you want to use a single container for all your data. I wouldn't keep users and employees as documents in the same container. They are separate domains and deserve their own container.
See Azure docs page on Partition Strategy and subsequent paragraph about access patterns. The recommendation is to:
Choose a partition key that enables access patterns to be evenly spread across logical partitions.
In the access patterns section, the good practice mentioned is to separate data into hot, medium and cold data and place it into their own containers. One caveat is, that according to this page the max number of containers per database with shared throughput is 25.
If that is not possible, and all data has to end up in a single container, then docType seems to be the right partition key, because your queries will get data by docType if I understood correctly.
As 404 wrote, you want to avoid Hot Partitioning i.e. jamming most of documents in a container into a single or a few logical partitions. Therefore you want to choose a partition key based on most frequent operations. | unknown | |
d16851 | val | I managed to find a solution.
First, I declared a label.
#define ID_LABEL 1
static HWND myLabel;
Then created it.
case WM_CREATE:
myLabel = CreateWindow(TEXT("BUTTON"),TEXT("hello"),
WS_VISIBLE|WS_CHILD,50,50,150,25,
hwnd,(HMENU) ID_LABEL,NULL,NULL);
break;
And then for each button pressed/released, I edit the text. Example for when I press the left button of my mouse.
case WM_LBUTTONDOWN:
myLabel = CreateWindow(TEXT("BUTTON"),TEXT("left button pressed"),
WS_VISIBLE|WS_CHILD,50,50,150,25,
hwnd,(HMENU) ID_LABEL,NULL,NULL);
break;
And it's working. Is there a better way to do it? | unknown | |
d16852 | val | Instead of calling the sendTemplate() function I should have used
$mandrill->messages->send($message, $async=false, $ip_pool=null, $send_at=null);
Once I changed the function call the mail was sent. | unknown | |
d16853 | val | try this:
The focus event is sent to an element when it gains focus. This event is implicitly applicable to a limited set of elements, such as form elements (<input>, <select>, etc.) and links (<a href>).
$('#ooo').bind('focus click', function () {
$('#kkk').text('hello');
});
$('#ooo').blur( function () { // you can use `blur` handler
$('#kkk').empty();
});
http://jsfiddle.net/gvQYh/1/
A: After a little wrangling I have a solution :)
$('#ooo').focus(function () {
$('#kkk').text('hello');
visible = true;
});
$('* :not(#ooo)').focus( function() {
if (visible) {
$('#kkk').empty();
visible = false;
}
});
http://jsfiddle.net/nosfan1019/3nK84/ | unknown | |
d16854 | val | I do not really understand what you want to achieve. However, if you want to remove the selected color from the available colors, you just have to do something like this:
public onSelectColor(row, color) {
row.color = color;
this.availableColors = this.availableColors.filter(c => c !== color);
}
That will change all select elements, re-rendering them without the removed color.
If you do not want to remove the color from all the existing selects, then for each select you should have a different array of available colors. I would make the row an Angular component itself.
constructor(private _cr: ComponentFactoryResolver, private _viewContainerRef: ViewContainerRef) { };
public onSelectColor(row, color) {
//workout the new colors
const newAvailableColors = this.availableColors.filter(c => c !== color);
// Create new row component on the fly
const row: ComponentRef<RowComponent> =
this._viewContainerRef.createComponent(
this._cr.resolveComponentFactory(RowComponent)
);
//Add whatever you want to the component
row.instance.addColors(newAvailableColors);
row.instance.addSelectedColor(color);
}
The child component should emit an event when that select changes. So the child should have something like this:
@Output()
public changed = new EventEmitter();
And then, when the select has changed, emit the event with the color, as you say in the comment. Something like this:
this.changed.emit({ color: color, row: row });
And then the parent, when placing the child component in the HTML, will be able to catch the event. Something like that:
(changed)="onSelectColor($event)"
Obviously, the onSelectColor has to change its arguments. Now it should be accepting the event. (Create an interface for the event, instead of using any)
public onSelectColor(event: any) {
//Do the same but retrieving info from the event
event.color;
event.row;
} | unknown | |
d16855 | val | On Windows, Unicode output to console doesn't work by default, even if you use std::wcout.
To make it work, insert the following line at the beginning of your program:
_setmode( _fileno(stdout), _O_U16TEXT );
_setmode and _fileno are Microsoft specific function.
You may also have to change console font. I'm using Lucida Console which works fine for cyrillic letters.
Complete example:
#include <iostream>
#include <io.h> // _setmode()
#include <fcntl.h> // _O_U16TEXT
int main()
{
// Windows needs a little non-standard magic for Unicode console output.
_setmode( _fileno(stdout), _O_U16TEXT );
std::wcout << L"по русски\n";
}
The example should be saved as a UTF-8 encoded file because of the Unicode string literal, but this is not relevant in your case because you don't have Unicode string literals.
I have successfully tested this code under MSVC2015 and MSVC2017 on Win10. | unknown | |
d16856 | val | You got the property of type T and the return value should also be of type T? I don't believe that.
Maybe this will help:
var getValue = GetPrivateProperty<bool>(myObject, "BoolProperty"); // myObject is the instance holding the private property ("class" is a reserved keyword)
public static T GetPrivateProperty<T>(object obj, string name)
{
BindingFlags flags = BindingFlags.Instance | BindingFlags.NonPublic;
PropertyInfo field = null;
var objType = obj.GetType();
while (objType != null && field == null)
{
field = objType.GetProperty(name, flags);
objType = objType.BaseType;
}
return (T)field.GetValue(obj, null);
}
Please see the changes of <BaseClass> to <bool> and typeof(T).GetProperty to obj.GetType().GetProperty. | unknown | |
d16857 | val | Your code works just fine. If my understanding is correct and all you need is just converting the output value (string separated by "|") to an array, then you can do it just by
Local $arr = StringSplit($var, "|") | unknown | |
d16858 | val | Make sure the image file is present at the correct location. It should be under the src/images folder.
You can try any one of these, based on the image location.
// Read from same package
ImageIO.read(getClass().getResourceAsStream("folder63.png"));
// Read from images folder parallel to src in your project
ImageIO.read(new File("images/folder63.jpg"));
// Read from src/images folder
ImageIO.read(getClass().getResource("/images/folder63.png"))
// Read from src/images folder
ImageIO.read(getClass().getResourceAsStream("/images/folder63.png"))
Read more...
It's worth reading Java Tutorial on Loading Images Using getResource
A: Try this
InputStream input = classLoader.getResourceAsStream("image.jpg"); | unknown | |
d16859 | val | I'd suggest:
$('.tab').find('.accordion:first');
JS Fiddle proof-of-concept.
A: Try this:
$('.accordion:first', '.tab') | unknown | |
d16860 | val | You appear to be mixing server side and client side code.
The s:property tags will be evaluated first on the server side, long before any value of m is valid, as that is client side JavaScript code.
If you post what you're trying to achieve then I or someone else may be able to help further.
HTH | unknown | |
d16861 | val | 100m (milli) means a loadavg of 0.1: your server is pretty fine! | unknown | |
d16862 | val | A simpler and more readable solution would be the following:
def gt(lst, n):
return max(lst) > n
A: using a one liner
def gt(nums, n):
return any(e > n for e in nums)
this breaks out of the iteration as soon as the first element bigger than n is found.
A: Long-ish response to Niklas B.'s comment:
I decided to test this, and here are the results. Blue dots are your function, green are Mario's; y axis is runtime in seconds, x axis is len(nums).
As you said, both are O(n). Yours is faster up to about 45 items; for anything over 100 items, his is roughly twice as fast.
It's mostly irrelevant - this seems to be more of a beginner syntax question than anything else - and, as you say, Python isn't a speed demon to begin with. On the other hand, who doesn't like a bit more speed (so long as readability doesn't suffer)?
For those interested, here's the code I wrote to test this:
from random import randint
from timeit import Timer
import matplotlib.pyplot as plt
def gt1(nums, n):
# based on Niklas B.'s answer - NOTE comparison is corrected
return n < max(nums)
def gt2(nums, n):
# based on Mario Fernandez's answer
return any(e > n for e in nums)
def make_data(length, lo=0, hi=None):
if hi is None:
hi = lo + length - 1
elif lo > hi:
lo,hi = hi,lo
return [randint(lo, hi) for i in xrange(length)]
def make_args(d):
nums = make_data(d)
n = randint(0,d)
return "{}, {}".format(nums, n)
def time_functions(fns, domain, make_args, reps=10, number=10):
fns = [fn.__name__ if callable(fn) else fn for fn in fns]
data = [[] for fn in fns]
for d in domain:
for r in xrange(reps):
args = make_args(d)
for i,fn in enumerate(fns):
timer = Timer(
setup='from __main__ import {}'.format(fn),
                    stmt='{}({})'.format(fn, args)  # reuse the same args so both functions are timed on identical input
)
data[i].extend((d,res) for res in timer.repeat(number=number))
return data
def plot_data(data, formats=None):
fig = plt.figure()
ax = fig.add_subplot(111)
if formats is None:
for d in data:
ax.plot([x for x,y in d], [y for x,y in d])
else:
for d,f in zip(data, formats):
ax.plot([x for x,y in d], [y for x,y in d], f)
plt.show()
def main():
data = time_functions([gt1, gt2], xrange(10, 501, 10), make_args)
plot_data(data, ['bo', 'g.'])
if __name__=="__main__":
main()
A: This could be better
def gt(nums, n):
for c in nums:
if n < c:
return True
return False | unknown | |
d16863 | val | There is a Hibernate-specific @Immutable annotation, which I use in the following code snippet.
A view can be mapped as below:
@Entity
@Immutable
public class <NAME>View {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "id", updatable = false, nullable = false)
private Long id;
@Version
@Column(name = "version")
private int version;
The @Immutable annotation tells Hibernate to ignore all changes on this entity, but you can use it to retrieve data from the database.
List<View> views = em.createQuery("SELECT v FROM View v", View.class).getResultList(); | unknown | |
d16864 | val | Just displaying the particular emoji you want is pretty simple, using the INDEX function.
Proxy Date Time     Proxy Score   Broker Score   Staff Name      Emoji
12/1/2018 9:24      2             3              Alan Ball
12/1/2018 11:03     3             2              Adam O'Tough
12/1/2018 11:44     2             1              Brian King      ✔
So let's say you have the emoji definitions in a range, $C$3:$G$7:
A B C D E F G
----------------------------------------------------------
1 | Real
2 | 1 2 3 4 5
3 | P 1 ✔ ✔
4 | r 2 ✔ ✔
5 | o 3 ✔
6 | x 4 ✔ ✔
7 | y 5 ✔ ✔
Now let's assume your data is in a table. To get the appropriate emoji, you'd just use a formula like this: =INDEX($C$3:$G$7,[@[Proxy Score]],[@[Broker Score]]) | unknown | |
d16865 | val | You need new feature values to make predictions.
For example:
svmRbftune <- train(unemploy ~ pce + pop + psavert + uempmed,
data = head(EconomicsTrain,-3), method = "svmRadial",
                    tuneLength = 14, trControl = trainControl(method = "cv"))
svmRbfPredict <- predict(svmRbftune, tail(EconomicsTest,3)) | unknown | |
d16866 | val | You aren't setting the history on your Router correctly
<Router history={history} /> | unknown | |
d16867 | val | Personally, I like having clean, isolated, small Partial Views, especially if it is going to be a regular HTTP POST.
However, based on the assumptions I am making below, I think I can suggest a better implementation design.
My Assumption
You have
*
*Index.cshtml Parent view to display a list of Users.
*JSON object array containing your list of Users
*Based on what I see, you are using KnockoutJS.
Read the KnockoutJS Template Binding especially the "Note 5: Dynamically choosing which template is used" part.
It kind of makes it easier to do what you are doing if you are using KnockoutJS or something similar.
You simply have to toggle between the two rendering templates.
<script type="text/html" id="gallery-layout-template"> ... </script>
<script type="text/html" id="listing-layout-template"> ... </script>
<div id="divOutputContainer"
data-bind="template: { name: displayTemplate, foreach: users }"></div>
<script type="text/javascript">
$(document).ready(function() {
// I am just writing out a dummy User array here.
// Render out your User Array JSON encoded using JSON.NET.
var myUsers = [
{ "id" : 1, "name": "User 1" },
{ "id" : 2, "name": "User 2" }
];
// here is your KnockoutJS View Model "class"
function MyKoViewModel(users) {
var self = this;
self.users = ko.observableArray(users);
// Toggle this renderMode observable's value
// between 'listing' and 'gallery' via your Toggle button click event handler
self.renderMode = ko.observable( 'gallery' );
self.displayTemplate = function(user) {
// this will return 'gallery-layout-template' or 'listing-layout-template'
return self.renderMode() + '-layout-template';
}
}
ko.applyBindings( new MyKoViewModel( myUsers ) );
});
</script>
So with that technique, you don't need to make an AJAX call every time to refresh the view with a different rendering template.
You have all your data that you want to display as a client-side JavaScript KnockoutJS view model.
Then, just switch the client-side rendering template using KnockoutJS.
Much more efficient :-)
NOTE
I have a feeling, you might have to use the ko.computed() for the MyKoViewModel's displayTemplate() function like this.
self.displayTemplate = ko.computed(function() {
return self.renderMode() + '-layout-template';
} | unknown | |
d16868 | val | Block.setType by default updates neighboring blocks, which, if happening near a yet unpopulated chunk, causes it to populate as well, and so on. This needs to be turned off with the second boolean parameter: setType(Material.STONE, false). | unknown | |
d16869 | val | Use str.contains with \d to match rows containing a digit, and filter by boolean indexing:
df = df[df.column1.str.contains('\d')]
print (df)
column1
1 How are you ? 123 a45
2 123444234324!!! (This is also string)
3 sdsfds sdfsdf 233423
EDIT:
print (df)
column1
0 hi all, i am fine d78
1 How are you ? 123 a45
2 123444234324!!!
3 sdsfds sdfsdf 233423
4 adsfd xcvbb cbcvbcvcbc
5 234324@
6 123! vc
df = df[df.column1.str.contains(r'^\d+[<!\-[.*?\]>@]+$')]
print (df)
column1
2 123444234324!!!
5 234324@ | unknown | |
d16870 | val | When you include two parameters, the first parameter of range() is the first element of the range:
offset = 10
for i in range(offset, 100 + offset):
print(i)
If you don't want the end point to move by the offset too, simply remove the + offset from the second argument.
A: You can use map to apply the offset to the results of range at runtime:
offset = 10
for i in map(lambda x: x + offset, range(100)):
print(i)
will print
10
11
...
108
109
A: If you use numpy, there is a very simple and elegant solution:
import numpy as np
a = np.arange(100)
offset = 10
stride = 1
idx = np.r_[1:2, 3:4, 6:8]*stride+offset
print(a[idx])
## [11 13 16 17]
a[idx] = 0
print(a[:20])
## [ 0 1 2 3 4 5 6 7 8 9 10 0 12 0 14 15 0 0 18 19] | unknown | |
d16871 | val | I have looked at the official source code of the Controller class to see what happens when View is called. It turns out, all the different View method overloads ultimately call the following method:
protected internal virtual ViewResult View(string viewName, string masterName, object model)
{
if (model != null)
{
ViewData.Model = model;
}
return new ViewResult
{
ViewName = viewName,
MasterName = masterName,
ViewData = ViewData,
TempData = TempData,
ViewEngineCollection = ViewEngineCollection
};
}
This method (and thus all the other overloads) will never return NULL, although it could throw an exception. It is virtual though, which means that the code you are calling might override it with a custom implementation and return NULL. Could you check if the View method is overridden anywhere?
A: It's possible that the View method is overridden. Try removing the this qualifier.
return View("ConfirmAddress", addressModel); | unknown | |
d16872 | val | You need to pass a query and a callback. .count() is asynchronous and will just return a promise, not the actual document count value.
collection.count({}, function (error, count) {
console.log(error, count);
});
A: Or you can use co-monk and take the rest of the morning off:
var monk = require('monk');
var wrap = require('co-monk');
var db = monk('localhost/test');
var users = wrap(db.get('users'));
var numberOfUsers = yield users.count({});
Of course that requires that you stop doing callbacks... :) | unknown | |
d16873 | val | I'm not familiar with Code::Blocks, but Visual Studio has a concept of a post-build event.
After a successful build, Visual Studio will execute instructions (DOS commands) listed in the post-build events section of a project's properties.
I'm sure Code::Blocks will have a similar mechanism. You could use this to copy the DLLs to wherever you need them.
You should also be aware of the DLL search order on Windows. You could also copy the DLLs to a standard location, and your program will look for them there, if they're not found in the same folder as the executable.
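For illustration only (Code::Blocks and Visual Studio both just run whatever command you give them as a post-build step, and the paths and DLL names below are made-up placeholders), a small helper script along these lines could be wired up to copy the DLLs next to the freshly built executable; a plain copy command in the post-build box achieves the same thing:
import shutil
from pathlib import Path

# Hypothetical locations - adjust to your own project layout
DLL_SOURCE_DIR = Path("C:/libs/bin")
BUILD_OUTPUT_DIR = Path("C:/projects/myapp/bin/Debug")
DLLS = ["somelib.dll", "otherlib.dll"]  # whatever your executable links against

for name in DLLS:
    src = DLL_SOURCE_DIR / name
    if src.exists():
        # copy2 keeps timestamps, so unchanged DLLs are easy to spot
        shutil.copy2(src, BUILD_OUTPUT_DIR / name)
    else:
        print("warning: missing", src)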
I wouldn't be too afraid of modifying the Path variable, it's less permanent than copying various DLLs all over the place!
Sounds like an interesting project. Good luck! | unknown | |
d16874 | val | We are constantly improving Railflow and report handling, so we are more than happy to add support for the Cucumber data tables.
Please contact the support team via our website
Update: This is now implemented and available in Railflow NPM CLI v. 2.1.12 | unknown | |
d16875 | val | As pointed out by @vexe, you just need to escape the backslash. But you can also simplify your function:
function! IsLineEmpty(line)
return match(a:line, "^\\s*$") != -1
endfu
Another solution would be to use single quotes instead of double:
return match(a:line, '^\s*$') != -1
Another solution would be to use Vim's regexp matches operator:
return line =~ '^\s*$'
Alternatively, you could test if the line contains any non-whitespace characters:
return line !~ '[^\s]' | unknown | |
d16876 | val | Well I searched and found one solution to parse date as SQL SERVER timestamp
string currentTime = DateTime.Now.ToString("hh:mm:tt");
byte[] TimeOfAdmission = Encoding.ASCII.GetBytes(currentTime); | unknown | |
d16877 | val | urlencode() it so & turns into %26.
If you need to make a query string out of some parameters, you can use http_build_query() instead and it will URL encode your parameters for you.
On the receiving end, your $_GET values will be decoded for you by PHP, so the query string a=Steak%26Cheese corresponds to $_GET = array('a' => 'Steak&Cheese').
A: Yes, you must URL-encode the value before building the request URL. Read this http://www.w3schools.com/TAGS/ref_urlencode.asp
A: Here is a previous post covering this in jQuery AJAX requests, but to summarize, you have to encode the URI. This will convert the ampersand into its percent-encoded value.
Ampersand in GET, PHP | unknown | |
d16878 | val | You mention you already understand options #1 and #3 so I'll focus on option #2.
As I state in the question comments, option #2 doesn't compile as written. However, I believe your intent is to obtain the class in an instance fashion rather than a static fashion (MyClass.class).
public class MyClass {
public void foo() {
synchronized (MyClass.class) {
}
}
public void bar() {
synchronized (getClass()) {
}
}
}
In the above code both MyClass.class and getClass() return the same object which means they are "equivalent". However, you have to be careful here.
public class MySubClass extends MyClass {
// inherits methods...
}
Now the two methods (foo and bar) are not equivalent. The method foo still uses the Class of MyClass to synchronize but bar now uses the Class of MySubClass (i.e. they are no longer synchronizing on the same object).
A: Point 1 will take the lock on the Class Object and only one object can exists (if the same class is not loaded by different classloaders ) in the JVM. This can be used with Static as well as noon static methods
Second option will not compile.
Third Option will take the locks on the current object. Third option can be used with instance method as this can be used in case of static methods. | unknown | |
d16879 | val | Remove the blue line from the top of the view to the small container on top left. | unknown | |
d16880 | val | Declare the first row to be column names and the first column to be row names:
df = pd.DataFrame(data=a[1:], columns=a[0]).set_index(' ')
df.index.name = None
# 0 A T G
#0 0 0 0 0
#G 0 -3 -3 5
#G 0 -3 -6 2
#A 0 5 0 -3 | unknown | |
d16881 | val | Did you try the field "friends"?
https://graph.facebook.com/USER_ID?fields=friends&access_token=ACCESS_TOKEN
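For illustration, here is roughly how that call could be made from Python with the requests library (USER_ID and ACCESS_TOKEN are placeholders, and the exact response shape depends on the Graph API version, but the friend list is typically nested under friends.data):
import requests

USER_ID = "USER_ID"            # placeholder
ACCESS_TOKEN = "ACCESS_TOKEN"  # placeholder

resp = requests.get(
    "https://graph.facebook.com/" + USER_ID,
    params={"fields": "friends", "access_token": ACCESS_TOKEN},
)
resp.raise_for_status()

# Typical shape: {"id": "...", "friends": {"data": [{"id": "...", "name": "..."}, ...]}}
for friend in resp.json().get("friends", {}).get("data", []):
    print(friend["id"], friend.get("name"))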
If you requirement is just to fetch user's friends and their facebook ids, you can get them here. | unknown | |
d16882 | val | The two important things you need to do are to remove the volumes: declaration that gets the actual application code from your local filesystem, and upload your built Docker image to some registry. Since you're otherwise on AWS, ECR is a ready option; public Docker Hub will work fine too.
Depending on how your Rails app is structured, it might make sense to use the same image with different commands for the main application and the Sidekiq worker(s), and it might work to just make it say
sidekiq:
image: production_image
command: bundle exec sidekiq
Since you're looking at AWS anyways you should also consider the possibility of using hosted services for data storage (RDS for the database, Elasticache for Redis). The important thing is to include the locations of those data stores as environment variables so that you can change them later (maybe they would default to localhost for developer use, but always be something different when deployed).
You'll also notice that my examples don't have links:. Docker provides an internal DNS service for containers to find each other, and Docker Compose arranges for containers to be found via their service key in the YAML file.
Finally, you should be able to test this setup locally before deploying it to EC2. Run docker build and docker-compose up as needed; debug; and if it works then docker push the image(s) and launch it on Amazon.
version: '3'
volumes: *volumes_from_the_question
services:
db: *db_from_the_question
redis: *redis_from_the_question
sidekiq:
image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/sidekiq:1.0
environment:
- PGHOST: db
- REDIS_HOST: redis
app:
image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
ports:
- "80:80"
environment:
- PGHOST: db
- REDIS_HOST: redis | unknown | |
d16883 | val | There is no per-application limit, so to speak. The amount of memory available to an application depends on the amount of free memory, which in turn depends on the amount of memory used by applications running in the background. These apps include permanently running system apps like SpringBoard, sometimes-running system apps like Safari, iPod, etc., and (when iOS 4 comes to the iPad) user apps that still run in the background.
Nevertheless, I'd say an app should never use more than 50% of all available ram. On iPad this currently means 128 MB and should be quite a lot. Did you do a leak check on your app? | unknown | |
d16884 | val | The only reason you have a scuffed error message that references anything about dtypes, is because you're using the NumExpr engine.
Here, using the python engine, getting a KeyError is clearer:
>>> df.query("'tom'", engine='python')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/bert2me/miniconda3/envs/deleteme/lib/python3.6/site-packages/pandas/core/frame.py", line 3348, in query
result = self.loc[res]
File "/home/bert2me/miniconda3/envs/deleteme/lib/python3.6/site-packages/pandas/core/indexing.py", line 879, in __getitem__
return self._getitem_axis(maybe_callable, axis=axis)
File "/home/bert2me/miniconda3/envs/deleteme/lib/python3.6/site-packages/pandas/core/indexing.py", line 1110, in _getitem_axis
return self._get_label(key, axis=axis)
File "/home/bert2me/miniconda3/envs/deleteme/lib/python3.6/site-packages/pandas/core/indexing.py", line 1059, in _get_label
return self.obj.xs(label, axis=axis)
File "/home/bert2me/miniconda3/envs/deleteme/lib/python3.6/site-packages/pandas/core/generic.py", line 3493, in xs
loc = self.index.get_loc(key)
File "/home/bert2me/miniconda3/envs/deleteme/lib/python3.6/site-packages/pandas/core/indexes/range.py", line 358, in get_loc
raise KeyError(key)
KeyError: 'tom'
As wjandrea pointed out... this isn't a valid query statement to begin with... did you mean?:
>>> df.query("Name == 'tom'")
Name Age
0 tom 10 | unknown | |
d16885 | val | You can use a LEFT JOIN to achieve the desired results, joining the two tables on matching player ids (noting that player id values in wp_match_scores_test can correspond to either player1_home_id or player1_away_id in wp_schedules_test). If there is no match, the result table will have NULL values in the wp_match_scores_test columns, and you can use that to select the matches which have not been played:
SELECT sch.*
FROM wp_schedule_test sch
LEFT JOIN wp_match_scores_test ms
ON (ms.player1_id = sch.player1_home_id
OR ms.player2_id = sch.player1_home_id)
AND (ms.player1_id = sch.player1_away_id
OR ms.player2_id = sch.player1_away_id)
WHERE ms.ID IS NULL
Output:
ID match_week home_player1 away_player1 player1_home_id player1_away_id
2 Week 1 John Head David Foster 81 175
8 Week 3 John Head Eric Simmons 81 23
12 Week 4 Dale Hemme John Head 169 81
Note that you can also use a NOT EXISTS query, using the same condition as I used in the JOIN:
SELECT sch.*
FROM wp_schedule_test sch
WHERE NOT EXISTS (SELECT *
FROM wp_match_scores_test ms
WHERE (ms.player1_id = sch.player1_home_id
OR ms.player2_id = sch.player1_home_id)
AND (ms.player1_id = sch.player1_away_id
OR ms.player2_id = sch.player1_away_id))
The output of this query is the same. Note though that conditions in the WHERE clause have to be evaluated for every row in the result set and that will generally make this query less efficient than the LEFT JOIN equivalent.
Demo on dbfiddle | unknown | |
d16886 | val | You are probably trying to prepare the MediaPlayer when the MediaPlayer is in a wrong state. Check the MediaPlayer State Diagram. The prepare or prepareAsync methods can be called only if the MediaPlayer is in Initialized or Stopped state.
Since the MediaPlayer state is not accessible, you could use this extension to check what's wrong with your MediaPlayer (remember to remove the external player listeners from your code before trying this)
import android.content.Context
import android.media.MediaPlayer
import android.net.Uri
import android.util.Log
class AudioPlayer : MediaPlayer(), MediaPlayer.OnBufferingUpdateListener, MediaPlayer.OnCompletionListener, MediaPlayer.OnErrorListener, MediaPlayer.OnInfoListener, MediaPlayer.OnPreparedListener {
enum class AudioPlayerState {
IDLE,
INITIALIZED,
PREPARING,
PREPARED,
STARTED,
STOPPED,
PAUSED,
PLAYBACK_COMPLETED,
ERROR,
END
}
var playerState: AudioPlayerState? = null
set(value) {
Log.d(javaClass.name, "Setting new player state $value")
field = value
}
init {
playerState = AudioPlayerState.IDLE
setOnPreparedListener(this)
setOnBufferingUpdateListener(this)
setOnCompletionListener(this)
setOnErrorListener(this)
setOnInfoListener(this)
}
override fun setDataSource(context: Context, uri: Uri) {
if (playerState != AudioPlayerState.IDLE) Log.e(javaClass.name, "Trying to set data source on player state $playerState")
super.setDataSource(context, uri)
playerState = AudioPlayerState.INITIALIZED
}
override fun prepare() {
if (playerState != AudioPlayerState.INITIALIZED && playerState != AudioPlayerState.STOPPED) Log.e(javaClass.name, "Trying to prepare on player state $playerState")
playerState = try {
super.prepare()
AudioPlayerState.PREPARED
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player prepare ${e.message} with state $playerState")
AudioPlayerState.ERROR
}
}
override fun prepareAsync() {
playerState = try {
super.prepareAsync()
AudioPlayerState.PREPARING
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player prepare async ${e.message} with state $playerState")
AudioPlayerState.ERROR
}
}
override fun start() {
if (playerState != AudioPlayerState.PREPARED &&
playerState != AudioPlayerState.STARTED &&
playerState != AudioPlayerState.PAUSED &&
playerState != AudioPlayerState.PLAYBACK_COMPLETED
) Log.e(javaClass.name, "Trying to start on player state $playerState")
playerState = try {
super.start()
AudioPlayerState.STARTED
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player start ${e.message} with state $playerState")
AudioPlayerState.ERROR
}
}
override fun stop() {
if (playerState != AudioPlayerState.PREPARED &&
playerState != AudioPlayerState.STARTED &&
playerState != AudioPlayerState.STOPPED &&
playerState != AudioPlayerState.PAUSED &&
playerState != AudioPlayerState.PLAYBACK_COMPLETED
) Log.e(javaClass.name, "Trying to stop on player state $playerState")
playerState = try {
super.stop()
AudioPlayerState.STOPPED
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player stop ${e.message} with state $playerState")
AudioPlayerState.ERROR
}
}
override fun pause() {
if (playerState != AudioPlayerState.STARTED &&
playerState != AudioPlayerState.PAUSED &&
playerState != AudioPlayerState.PLAYBACK_COMPLETED
) Log.e(javaClass.name, "Trying to pause on player state $playerState")
playerState = try {
super.pause()
AudioPlayerState.PAUSED
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player pause ${e.message} with state $playerState")
AudioPlayerState.ERROR
}
}
override fun seekTo(msec: Int) {
if (playerState != AudioPlayerState.PREPARED &&
playerState != AudioPlayerState.STARTED &&
playerState != AudioPlayerState.PAUSED &&
playerState != AudioPlayerState.PLAYBACK_COMPLETED
) Log.e(javaClass.name, "Trying to seek to on player state $playerState")
try {
super.seekTo(msec)
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player seek to ${e.message} with state $playerState")
playerState = AudioPlayerState.ERROR
}
}
override fun seekTo(msec: Long, mode: Int) {
if (playerState != AudioPlayerState.PREPARED &&
playerState != AudioPlayerState.STARTED &&
playerState != AudioPlayerState.PAUSED &&
playerState != AudioPlayerState.PLAYBACK_COMPLETED
) Log.e(javaClass.name, "Trying to seek to with mode on player state $playerState")
try {
super.seekTo(msec, mode)
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player seek to with mode ${e.message} with state $playerState")
playerState = AudioPlayerState.ERROR
}
}
override fun getCurrentPosition(): Int {
if (playerState == null) return 0
if (playerState != AudioPlayerState.IDLE &&
playerState != AudioPlayerState.INITIALIZED &&
playerState != AudioPlayerState.PREPARED &&
playerState != AudioPlayerState.STARTED &&
playerState != AudioPlayerState.PAUSED &&
playerState != AudioPlayerState.STOPPED &&
playerState != AudioPlayerState.PLAYBACK_COMPLETED
) Log.e(javaClass.name, "Trying to get current position on player state $playerState")
try {
return super.getCurrentPosition()
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player current position ${e.message} with state $playerState")
playerState = AudioPlayerState.ERROR
}
return 0
}
override fun isPlaying(): Boolean {
if (playerState == null) return false
if (playerState != AudioPlayerState.IDLE &&
playerState != AudioPlayerState.INITIALIZED &&
playerState != AudioPlayerState.PREPARED &&
playerState != AudioPlayerState.STARTED &&
playerState != AudioPlayerState.PAUSED &&
playerState != AudioPlayerState.STOPPED &&
playerState != AudioPlayerState.PLAYBACK_COMPLETED
) Log.e(javaClass.name, "Trying to get is playing on player state $playerState")
try {
return super.isPlaying()
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player is playing ${e.message} with state $playerState")
playerState = AudioPlayerState.ERROR
}
return false
}
override fun reset() {
playerState = try {
super.reset()
AudioPlayerState.IDLE
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player reset ${e.message} with state $playerState")
AudioPlayerState.ERROR
}
}
override fun release() {
playerState = try {
super.release()
AudioPlayerState.END
} catch (e: Exception) {
e.printStackTrace()
Log.e(javaClass.name, "Exception on audio player release ${e.message} with state $playerState")
AudioPlayerState.ERROR
}
}
override fun onPrepared(mp: MediaPlayer?) {
if (playerState != AudioPlayerState.STARTED) {
playerState = AudioPlayerState.PREPARED
}
}
override fun onInfo(mp: MediaPlayer?, what: Int, extra: Int): Boolean {
return false
}
override fun onError(mp: MediaPlayer?, what: Int, extra: Int): Boolean {
playerState = AudioPlayerState.ERROR
return false
}
override fun onCompletion(mp: MediaPlayer?) {
playerState = AudioPlayerState.PLAYBACK_COMPLETED
seekTo(0)
}
override fun onBufferingUpdate(mp: MediaPlayer?, percent: Int) {
}
}
A: According to the exception, the mediaPlayer is in a wrong state. Since I cannot see how you have defined the mediaPlayer, I assume you reuse it. If so, make sure to stop and reset the mediaPlayer before setting a new dataSource, like the following in onPostExecute:
Read more here: MediaPlayer
@Override
protected void onPostExecute(FileCacheMediaDataSource dataSource) {
super.onPostExecute(dataSource);
if (mMediaPlayer == null) {
return;
}
if (dataSource.getSize() == -1) {
LogD.d(TAG, "showErrorScreenSaver");
screenSaverError();
return;
}
try {
// stop and reset media player
mMediaPlayer.stop();
mMediaPlayer.reset();
// set datasource
mMediaPlayer.setDataSource(dataSource);
mMediaPlayer.setLooping(true);
mMediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
@Override
public void onPrepared(MediaPlayer mp) {
mp.start();
}
});
mMediaPlayer.prepareAsync();
} catch (IOException e) {
LogD.i(TAG, "startPlayFirstVideo = " + e.toString());
}
} | unknown | |
d16887 | val | Keep your 20 column table, to be honest 20 columns is not a lot considering 5 columns are taken, as you tell, by who when (created, edited) and the ID.
There are other considerations when designing a database table, the number of columns should not be a priority. You can think about splitting it into multiple tables when you have some performance issues but for now keep your table. Go to 40 columns and still you should not care.
The person that said never more then 10 was either talking about some performance issues that can be addressed, or it was a specific case where it made a difference or has some strange "standards".
But you should really understand database design, as you will see how to design your tables better, but for now make tables with as many fields you want as long as you respect some normalization rules
A: I think you never want to change qour schema after your database was in use som for some time.
Think about features/functions you propably could wanting to implement in feature, and decide basing on that, if you want to have 40 columns.
I prefer to have everything as compact as possible, so i got a lot of foreign-keys and a lot of small tables for the reason to not write or have saved anything twice. | unknown | |
d16888 | val | These four are MVC frameworks:
*
*CakePHP
*Symfony
*CodeIgniter
*Kohana
I prefer CodeIgniter and Kohana, because they're pretty focused and not bloated at all, and because they both, besides being MVC, are also big on the convention over configuration principle, meaning you don't have to go around maintaining XML/YAML/etc config files of your classes, URL routes, etc.
In particular I like Kohana because it has this nifty file system-based configuration hierarchy (they call it "Cascading Filesystem") which basically means you have even less configuration nonsense to maintain, because based on where you put your app's files (classes, config files, etc), the framework will know which parts of the system will be overridden. So I'd recommend you give Kohana a test run. Beware though, it's relatively new and the documentation is kind of weak, so if your google fu is indeed weak as you say, then you might be better off going straight for CodeIgniter, which has been around for longer and thus has more docs. But I'd still keep an eye on the Kohana project.
Symfony is... too bloated for my taste (i.e. having to run scripts in order to "generate views" and whatnot), but I've seen some large successful projects running on it.
A: Rails is an MVC framework. For PHP you could use CodeIgniter or CakePHP; both of those use the MVC design pattern. CodeIgniter is the bomb.
A: You can find more discussion about PHP frameworks here: http://www.quora.com/Whats-the-best-MVC-framework-for-PHP
I haven't looked into other frameworks; I have found CodeIgniter to satisfy most of my requirements for an MVC framework.
A: Sure, there's CodeIgniter and Frostbite Framework. Both are good, and easy to find via Google. Here is a whole list of PHP frameworks: http://matrix.include-once.org/framework/simple
A: Everyone else pretty much nailed it. The only reason I'm adding on to this question is because you use Ruby on Rails, and as such, CakePHP is going to be the most similar framework for you.
I use CodeIgniter because it's very well-documented and lightweight (with very little magic), but that's just my personal preference. Cake will be most like what you're used to.
A: I think laravel is best for you. Remember, frameworks are for SSBs Small scale businesses. For large scale businesses you write your own framework with all planning, execution phases etc. | unknown | |
d16889 | val | This doesn't mean anything. There is no "kernel object" in NT, and any lock you could possibly take would be released if the service were restarted.
A: This depends on what type of application it is. Some applications install and use kernel drivers as part of their usage. A kernel driver has the most low level access possible in the system and is capable of crashing or hanging the system. If the process uses a kernel driver, and the description alludes to this, then yes it can crash / hang the system.
I believe Windows Vista started limiting the amount of damage a kernel driver can accidentally do (graphics drivers especially). But intentionally, you can still cause lots of problems.
A: Depending on which precise kernel object they mean, and which service, this may very well be true. See for instance Raymond Chen on Loader Lock, a kernel lock which applications can monopolize. Restarting the service will then become a problem because the very unload of that service will require the loader lock, too. | unknown | |
d16890 | val | flow_from_directory() returns a DirectoryIterator. The files are not saved until you iterate over this iterator.
For example,
iterator = datagen.flow_from_directory(...)
next(iterator)
will save a batch of augmented images to save_to_dir. You can also use a for loop over the iterator to control how many images will be generated.
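For instance, a minimal sketch (the directory names are placeholders, save_to_dir must already exist, and depending on your Keras version ImageDataGenerator may live under tensorflow.keras.preprocessing.image instead):
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=20)  # any augmentation settings you like
iterator = datagen.flow_from_directory("data/train", save_to_dir="augmented")

# Each call to next() draws one batch and writes its augmented images to save_to_dir
for _ in range(5):  # e.g. save five batches' worth of images
    next(iterator)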
A: It's only a declaration; you must use that generator, for example .next():
datagen.next()
then you will see images in saved | unknown | |
d16891 | val | You can use a map, for instance:
data
|> Seq.map (fun r -> {| a = r.a ; b = r.b |})
|> Frame.ofRecords | unknown | |
d16892 | val | Hi, if you are trying to use the .mix class together with #adm_content_login, you can make both of them either a class or an id. For instance, if both of them are classes you can use it like this:
<div class="adm_content_login mix"></div>
Notice that there is a space between the two classes.
If that's not what you are trying to achieve, then you can achieve something like that using LESS (a CSS pre-processor).
A: You are using a class but you defined an id.
Use it this way, like this:
<div id="adm_content_login" class="mix"></div> | unknown | |
d16893 | val | Instead of
TransformerFactory transformerFactory = TransformerFactory.newInstance();
you should write
TransformerFactory transformerFactory = new TransformerFactoryImpl();
because not all implementations of TransformerFactory have this field: indent-number.
A: I fixed that exception by commenting out this line:
transformerFactory.setAttribute("indent-number", indent);
and adding this line:
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2");
The exception is gone, even though the indentation that appears in the browser is incorrect.
A: Likely because Xalan (as packaged in JDK 1.6/1.7) supports "indent-number", yet others don't and have their own way of specifying the size of the indent. So you have to put in the string appropriate for the XSLT provider. Work out which one you're using and see its docs.
Aren't standards that don't specify such things great?
A: You should use the predefined constant OutputKeys.INDENT, or if you really insist on hardcoding the value, it should be 'indent', not 'indent-number'. | unknown | |
d16894 | val | Your List view Recycle the rows when you scroll up or down It improver the performance of list view. But if you dont want to recycle it you can stop it. | unknown | |
d16895 | val | I see you have a SocketTimeoutException, perhaps you can catch this exception and connect using a new socket?
A: There is a bug that causes the system to reuse old http connections.
Setting the system property http.keepAlive to false should resolve the problem:
System.setProperty("http.keepAlive", "false"); | unknown | |
d16896 | val | Here's an option that doesn't totally do what you're asking, but should get you close if you add some css:
library(magrittr)
library(htmlwidgets)
library(rhandsontable)
library(shiny)
DF = data.frame(
comments = c(
"I would rate it ★★★★☆",
"This is the book about JavaScript"
),
cover = c(
"https://as1.ftcdn.net/v2/jpg/03/35/13/14/1000_F_335131435_DrHIQjlOKlu3GCXtpFkIG1v0cGgM9vJC.jpg",
"https://as1.ftcdn.net/v2/jpg/03/35/13/14/1000_F_335131435_DrHIQjlOKlu3GCXtpFkIG1v0cGgM9vJC.jpg"
),
text = c(
"descriptive text about img 1",
"descriptive text about img 2"
),
stringsAsFactors = FALSE
)
ui <- fluidPage(br(),rHandsontableOutput('my_table'))
server <- function(input, output, session) {
output$my_table <- renderRHandsontable({
rhandsontable::rhandsontable(DF, allowedTags = "<em><b><strong><a><big>",
width = 800, height = 450, rowHeaders = FALSE) %>%
hot_cols(colWidths = c(200, 80)) %>%
hot_col(1, renderer = htmlwidgets::JS("safeHtmlRenderer")) %>%
hot_col(2, renderer = "
function(instance, td, row, col, prop, value, cellProperties) {
var escaped = Handsontable.helper.stringify(value),
img;
if (escaped.indexOf('http') === 0) {
img = document.createElement('IMG');
img.src = value; img.style.width = 'auto'; img.style.height = '80px';
Handsontable.dom.addEvent(img, 'mousedown', function (e){
var exists = document.getElementById('test')
if (exists === null){
var textBlock = instance.params.data[[row]][[2]];
var popup = document.createElement('div');
popup.className = 'popup';
popup.id = 'test';
var cancel = document.createElement('div');
cancel.className = 'cancel';
cancel.innerHTML = '<center><b>close</b></center>';
cancel.onclick = function(e) {
popup.parentNode.removeChild(popup)
}
var message = document.createElement('span');
message.innerHTML = '<center>' + textBlock + '</center>';
popup.appendChild(message);
popup.appendChild(cancel);
document.body.appendChild(popup);
}
});
Handsontable.dom.empty(td);
td.appendChild(img);
}
else {
// render as text
Handsontable.renderers.TextRenderer.apply(this, arguments);
}
return td;
}") %>%
hot_cols(colWidths = ifelse(names(DF) != "text", 150, 0.1))
})
}
shinyApp(ui, server) | unknown | |
d16897 | val | OK, removing pure semantics from your question (which, in my mind, does have a material impact on deciding on implementing your chosen method) and concentrating on pure "SEO" value and impact:
The first example needs to be qualified more, as if we take your example as literal, then you are linking to the same page.html 3 times. Google (specifically) only takes the link anchor value from the 1st link to any page that it comes across, so - the value for the first example is only extracted from that first link. The 2nd link (using an IMG tag with an ALT attribute as the anchor value), and the 3rd link using read more as the anchor value are effectively "ignored". It's important that other signals are used to supplement the first link's true intended value, such as surrounding text, images etc.
The 2nd example (HTML5), wraps all of that semantic/surrounding content up to make the effective 'anchor' value from which search engines will derive the link's intended meaning, and then as a consequence, the meaning of the destination page of the link.
Using an anchor tag as a containing wrapper for content that contains additional emphasis (the H tag), an image and an additional div only increases the difficulty that a search engine has to decipher the intended meaning of the link so it can associate it with the destination page.
Search engines (and Google predominantly) are constantly improving their crawling ability to enable better algorithmic parsing and processing of the HTML. Apart from emphasis signals (which are very low), Google mostly ignores the mark-up. The exception is of course links - so making an effort to simplify the parsing/processing by providing clear signals as to a link's anchor text is the safest way forward. Expecting them to understand all of the differences of HTML3, vs HTML4, vs HTML5 and all of the transitional, strict and other variations of each, is probably expecting too much.
TL;DR
Possibly, but only in terms of true link value.
A: As far as I know, the second way is not bad in any way in terms of SEO, but the first may be slightly better, as the titles and images are more closely tied to the link.
Q. But better by how much?
A. May be not too much | unknown | |
d16898 | val | Just an update, here's what I use for MacOS, for anyone else that's wondering. You can include variables too. (This is Zsh)
curl -g -k -T ~/Documents/myfile.txt ftps://myusername:[email protected]/mydirectorypath/
curl -g -k -T ~/Documents/$(echo "$myfile")_.txt ftps://myusername:[email protected]/mydirectorypath/
A: curl -k -T /localfilepath/temptestfile.txt ftps://yourusername:[email protected]/somedirectory/
A: In case someone is looking for the answer for the ftp protocol (I googled it and found this question):
curl -T local_file.txt -u "login:password" ftp://my-ftp-server.com/remote/ftp/path/
It is also possible to use -n option instead of -u with curl to use ~/.netrc file for storing password information (might be safer in scripts). | unknown | |
d16899 | val | Yes, it can be removed.
I put together this AHK script that will hide the scrollbar in mIRC, and will keep it hidden even after resizing the window, as well as minimizing and restoring the window.
Load this up in autohotkey and it will start working once you click on a channel or anything in the switchbar.
~LButton::
MouseGetPos, , , , OutputVarControl
if (OutputVarControl = "mIRC_SwitchBar1" or OutputVarControl = "ScrollBar1")
{
WinWait, ahk_class mIRC
Control, Style, Hide, ScrollBar1
ControlGetPos, x,, Width,, mIRC_Channel1
ControlMove, mIRC_Channel1, ,, (Width + 18)
ControlGetPos, x,, Width,, mIRC_Query1
ControlMove, mIRC_Query1, ,, (Width + 18)
ControlGetPos, x,, Width,, mIRC_Status1
ControlMove, mIRC_Status1, ,, (Width + 18)
ControlGetPos, x,y, Width,Height, ListBox1
ControlMove, ListBox1, (x - 18),, ,
}
else if (OutputVarControl = "MSTaskListWClass1")
{
sleep, 500
if WinActive("ahk_class mIRC"){
WinWait, ahk_class mIRC
Control, Style, Hide, ScrollBar1
WinGetPos, X, Y, W, H, ahk_class mIRC
WinMove, ahk_class mIRC, , , , (W + 18),,
WinGetPos, X, Y, W, H, ahk_class mIRC
WinMove, ahk_class mIRC, , , , (W - 18),,
ControlGetPos, x,, Width,, mIRC_Channel1
ControlMove, mIRC_Channel1, ,, (Width + 18)
ControlGetPos, x,, Width,, mIRC_Query1
ControlMove, mIRC_Query1, ,, (Width + 18)
ControlGetPos, x,, Width,, mIRC_Status1
ControlMove, mIRC_Status1, ,, (Width + 18)
ControlGetPos, x,y, Width,Height, ListBox1
ControlMove, ListBox1, (x - 18),, ,
}
}
return
This assumes you use mIRC with one window maximized at a time inside the mIRC client, like this. But you can have as many channel/query windows open as you like; the script can handle switching between them.
A: No, The scroll-bar at the main custom window, can not be removed. | unknown | |
d16900 | val | You can use JavaScript for this
JavascriptExecutor executor = (JavascriptExecutor)driver;
executor.executeScript("arguments[0].setAttribute(arguments[1], arguments[2]);", nextListByNumber, "href", hrefNew); | unknown |