d14001 | val | While you can use compact to eliminate the nil values from your array, I'm not sure why you need this in the first place.
Doing
case name
when 'a'
return "SQL statement"
when 'b'
return "SQL statement"
when 'c'
return "SQL statement"
when 'd'
return "SQL statement"
end
is way more intuitive.
A: return [object1, object2].compact
You can use the compact method to remove nil values from an array.
A: You could write:
return (['a', 'b'].include?(name) && - SQL Logic 1 -) ||
(['c', 'd'].include?(name) && - SQL Logic 2 -)
If it's the last line of a method, you don't need return.
Let's see what's happening.
If ['a', 'b'].include?(name) #=> true, - SQL Logic 1 -, which is truthy, will be returned.
If ['a', 'b'].include?(name) #=> false, the first && clause is false (- SQL Logic 1 - is not executed), so we consider the other || clause. ['c', 'd'].include?(name) must be true, so we return - SQL Logic 2 -. | unknown | |
d14002 | val | Change the code like below for second function as well.
int Matrix::operator()(int i, int j) const
{
int _size = (_vec.size() + 2) / 3;
if ((i >= _size || i < 0) || (j >= _size || j < 0)) throw OVERINDEXED;
if (i != j && j != 0 && j != _size - 1) return 0;
else {
if (j == 0)
{
return _vec[i];
}
else if (j == _size - 1)
{
return _vec[_size + i];
}
else if (i == j && j != 0 && j != _size - 1)
{
return _vec[(_size * 2) + i];
}
}
return 0; // added this line
}
A: It requires some analysis to prove that all cases return.
And it seems your compiler doesn't do the full analysis:
int Matrix::operator()(int i, int j) const
{
int _size = (_vec.size() + 2) /3;
if ((i >= _size || i < 0) || (j >= _size || j < 0)) throw OVERINDEXED;
if (i != j && j != 0 && j != _size - 1) return 0;
else {
if (j == 0)
{
return _vec[i];
}
else if (j == _size - 1)
{
return _vec[_size + i];
}
else if (i == j && j != 0 && j != _size - 1)
{
return _vec[(_size * 2) + i];
}
else
{
// No return here.
// But is this case reachable?
// yes, for (i, j) respecting:
// (0 <= i && i < _size) && (0 <= j && j < _size)
// && ((i == j) || (j == 0) || (j == _size - 1)) // #2
// && (j != 0) && (j != _size - 1) // #1
// && (i != j || j == 0 || j == _size - 1) // #3
// which after simplification results indeed in false.
// #1 simplifies #2 to (i == j) and #3 to (i != j)
}
}
}
On the other hand, this means you are doing redundant tests that you can remove (which also pleases the compiler):
int Matrix::operator()(int i, int j) const
{
int _size = (_vec.size() + 2) /3;
if ((i >= _size || i < 0) || (j >= _size || j < 0)) throw OVERINDEXED;
if (i != j && j != 0 && j != _size - 1) return 0;
else {
if (j == 0)
{
return _vec[i];
}
else if (j == _size - 1)
{
return _vec[_size + i];
}
else // We have necessary (i == j && j != 0 && j != _size - 1)
{
return _vec[(_size * 2) + i];
}
}
} | unknown | |
d14003 | val | The range of elements is small. So create an array of counters for the possible values and increment the count for each value you find. For example, if you find 2, increment counter[2].
Then given your collection of numbers, just do an array lookup to get the count.
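The counting idea above is language-agnostic; here is a minimal Python sketch of it (the function name and the explicit lo/hi range parameters are illustrative assumptions, not from the answers below):

```python
def count_occurrences(array, queries, lo, hi):
    # counter[v - lo] holds how many times value v appears in array
    counter = [0] * (hi - lo + 1)
    for v in array:
        counter[v - lo] += 1
    # each query is then a single O(1) array lookup
    return [counter[q - lo] for q in queries]

print(count_occurrences([5, 3, 2, 1, 1, 5, 3, 2, 1], [1, 2, 5], 1, 5))
# → [3, 2, 2]
```

Building the counter is linear in the input size, and each lookup is constant time.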
A: The time complexity is O(max(m,n)) where m is the size of the array and n is the size of the collection. The space required is O(p) where p is the range of the integers that may appear in the array. We'll use p0 to denote the lower bound of this range.
The solution is simple:
*
*Construct an array C of size p and set all values to zero
*Walk over the input array and for each value k - increase C[k-p0] by 1. Now you have a count of each value.
*Walk over the collection and for each value k - print C[k-p0]
A: A very simple way to do this would be to use the values themselves as the keys of your resulting array.
My C skills are kind of weak, but here is how in pseudo-C (you'll have to tweak it to make it work properly):
int size = 10*10*10*10*10;
int input[size] = getInput();
int output[size] = {0}; // fill array with zeros
for (int i = 0; i < size; i++) {
    if (input[i] == -1)
        break; // assuming -1 signifies the end of the input
    output[input[i]]++; // increment the count at the index that matches the number
}
for (int i = 0; i < size; i++) {
    if (output[i] == 0)
        continue; // number wasn't in the input
    printf("%d\n", output[i]);
}
Basically, you put the input of the array in the matching index of output.
So, if your input is 5,3,2,1,1,5,3,2,1, you would put into output:
output[5]++;
output[3]++;
output[2]++;
output[1]++;
output[1]++;
output[5]++;
output[3]++;
output[2]++;
output[1]++;
resulting in an array that looks like this:
[0, 3, 2, 2, 0, 2]
Then you just output it, skipping the zeros because they weren't present.
A: You can simply make an array of size 10^5 and initialize it with 0. Now iterate over the input and increment the corresponding entry: when you encounter a 5, increment arr[5]. Then you can answer the queries in O(1) time.
Below is a code in java.
import java.util.Scanner;
public class test
{
public static void main(String args[])
{
Scanner s = new Scanner(System.in);
int n=s.nextInt();
int arr[]=new int[100001]; //Array is initialized 0
for(int i=0;i<n;i++)
{
int num=s.nextInt();
arr[num]++;
}
while(s.hasNextInt())
{
int p=s.nextInt();
System.out.println(arr[p]);
}
}
} | unknown | |
d14004 | val | Okay, I found the solution to the problem and it was quite simple in reality.
I disabled the Visual Studio ClearCase integration on the build server.
VS is being used as we need to build deployment projects and so we call devenv to do this for us. However we are only using it as a build engine, there is never a need for the build engine to know how to modify the source items as they will all have just come from ClearCase. The only items we allow the build server to modify are the assembly file version number attributes in the AssemblyInfo files, but we do that in the NAnt and not in Visual Studio.
So, disable the functionality and problem goes away. Probably not the solution for everybody but on a build server it was the way forward.
A: This troubleshooting item, got me to this fix pack which has solved the issue in our development environment. We're still using VS2005, but I would expect that it's the same issue as what was going wrong in VS2008.
A: This seems to be normal for VS2008; it checks out the .sln file when opening a solution. I don't like it either.
Your problem however is the fact that the .suo file is also checked in. This file should not be placed under source control. It is like the proj.user files. I suspect suo stands for Solution User Options.
A: You could:
*
*update your script in order to hijack the sln file in your snapshot view just after your "cleartool update -force -overwrite".
*or, to avoid the sln to be checked-out, you could try and keep the .suo file checked-in,
If the above suggestion works, then here is a couple of reason why one would want to keep this file under version control:
*
*Since the .suo file is disposable (VS2008 just creates a new one if it does not exist), having it under source control might be seen as a way to avoid that creation (hence preventing the ClearCase plugin from detecting it and trying to "add it to source control" or check it out).
*Another advantage of having the .suo file under version control (but not updated through any further checkout/checkin) is when you are comparing your checked-out project with another checked-out version of the same project downloaded elsewhere: that file will always be identical, as opposed to systematically different (since it is a binary file, and any new version of a binary file would register itself as different) | unknown | |
d14005 | val | I suggest having a separate project (module) within your multi-module build for reporting on the whole project. You might need the JacocoMerge task too. Let's assume a, b and c are Java projects, e.g.:
def javaProjects = [':a', ':b', ':c']
javaProjects.each {
project(it) {
apply plugin: 'java'
apply plugin: 'jacoco'
}
}
project(':report') {
FileCollection execData = files(javaProjects.collect { project(it).tasks.withType(Test).jacoco.destinationFile })
FileCollection sourceDirs = files(javaProjects.collect { project(it).sourceSets.main.java.srcDirs })
FileCollection classDirs = files(javaProjects.collect { project(it).sourceSets.main.java.output.classesDirs })
def testTasks = javaProjects.collect { project(it).tasks.withType(Test)}
task jacocoMerge(type: JacocoMerge) {
dependsOn testTasks
executionData execData
jacocoClasspath = classDirs
}
task coverageVerification(type: JacocoCoverageVerification) {
dependsOn jacocoMerge
executionData jacocoMerge.destinationFile
sourceDirectories sourceDirs
classDirectories classDirs
violationRules.rule.limit.minimum = 0.38
}
task jacocoReport(type: JacocoReport) {
dependsOn jacocoMerge
executionData jacocoMerge.destinationFile
sourceDirectories sourceDirs
classDirectories classDirs
}
}
A: Yes, I found another solution, which is to write the limit rule in each module. For example, module-A's code coverage is 50%, so in the build.gradle file of module-A the rule is as follows:
// for moudle-A module
jacocoTestCoverageVerification {
violationRules {
rule {
limit {
minimum = 0.5
}
}
}
} | unknown | |
d14006 | val | Have you looked at all of the data flavors on the clipboard?
The plain string might not have the non-breaking character, but one of the other, more specific formats might. In Word/Excel, do "Paste Special" to see what other formats are available, or enumerate them in code.
I'm betting there are multiple kinds of data on the clipboard: Word prefers one, Excel favors the other, and you're getting just the plain text string in your code. | unknown | |
d14007 | val | def gen_data():
subitems = []
for subitem in range(2):
subitems.append({
'title': subitem,
'prop': None,
})
data = []
for item in range(3):
data.append({
'title': item,
'subitems': subitems,
})
return data
You are inserting the same subitems array into the data every time, so when you write one of its values:
data[item_index]['subitems'][subitem_index]['prop'] = value
You are updating that same subitem.
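The effect can be reproduced with a minimal sketch (the literal values are illustrative):

```python
# one list object, referenced from two places
shared = [{'title': 0, 'prop': None}]
data = [
    {'title': 0, 'subitems': shared},
    {'title': 1, 'subitems': shared},
]

data[0]['subitems'][0]['prop'] = 'value'

# both items see the change, because they hold the same list object
print(data[1]['subitems'][0]['prop'])              # → value
print(data[0]['subitems'] is data[1]['subitems'])  # → True
```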
Solution - put the sub-item generation code inside the data generation loop, to create an independent subitems array for each data item:
def gen_data():
data = []
for item in range(3):
subitems = []
for subitem in range(2):
subitems.append({
'title': subitem,
'prop': None,
})
data.append({
'title': item,
'subitems': subitems,
})
return data
The outputs now match as expected.
As a side note, since you are already iterating over the elements of the subitems arrays, the write statement can simply become:
subitem['prop'] = value
A: The problem is here:
def gen_data():
subitems = []
for subitem in range(2):
subitems.append({
'title': subitem,
'prop': None,
})
data = []
for item in range(3):
data.append({
'title': item,
'subitems': subitems, # here
})
return data
Within this function, you only made one subitems list, and you're referring to it in multiple places. If you want different copies, create that list within the loop:
def gen_data():
data = []
for item in range(3):
subitems = []
for subitem in range(2):
subitems.append({
'title': subitem,
'prop': None,
})
data.append({
'title': item,
'subitems': subitems, # here
})
return data | unknown | |
d14008 | val | For some reason, it got stuck because I had manually created two named pipes in the same folder. Deleting the pipes allowed the make process to terminate successfully.
EDIT: I'm only posting this because Googling did not give me any good results, and I think it would save someone else some time if they could find it easily when it occurs.
A: I was having the same problem. I think it's because I had set up an Angular project for the UI inside my webapp folder; because of this it was loading all the dependencies from the node_modules folder, which takes a lot of time.
When I took a backup of that Angular project and deleted it from the webapp, it worked. | unknown | |
d14009 | val | Most likely driver1 doesn't exist.
Try this
var temp = data.exists ? data.data() : "Doc does not exist";
Print list should return [Doc does not exist] instead of null.
Check if there is whitespace before driver1 (i.e. _driver1) or after it. Otherwise there's no other explanation. | unknown | |
d14010 | val | I don't think you can send a file to the server using AJAX, 'cause you don't have access to the file system via JavaScript. I don't believe what you're trying to do is possible without Flash or Silverlight. Try SWFUpload, for instance. I was using it on my previous project, and it worked fine for me.
EDIT:
And about returning the URL: SWFUpload lets you send some parameters along with the file. So you could generate a GUID (on the server side, then just insert it in the JavaScript for SWFUpload as an additional POST parameter) and once SWFUpload has finished uploading the image, save it on the server using the posted GUID as the file name. After that you can make an AJAX request from JavaScript using the same GUID and ask for the uploaded image. Hope this helps. | unknown | |
d14011 | val | You can check the Keras FAQ and especially the section "Why is the training loss much higher than the testing loss?".
I would also suggest you to take some time and read this very good article regarding some "sanity checks" you should always take into consideration when building a NN.
In addition, whenever possible, check if your results make sense. For example, in case of a n-class classification with categorical cross entropy the loss on the first epoch should be -ln(1/n).
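For instance, for a hypothetical 10-class problem this sanity check works out as follows (a quick illustration, not code from the question):

```python
import math

n = 10  # number of classes (illustrative)
# with uniformly random predictions, categorical cross entropy
# on the first epoch should be about -ln(1/n) == ln(n)
expected_first_epoch_loss = -math.log(1 / n)
print(round(expected_first_epoch_loss, 4))  # → 2.3026
```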
Apart from your specific case, I believe that besides the Dropout, the dataset split may sometimes result in this situation. Especially if the split is not random (in cases where temporal or spatial patterns exist), the validation set may be fundamentally different from the training set, i.e. have less noise or less variance, and thus be easier to predict, leading to higher accuracy on the validation set than on training.
Moreover, if the validation set is very small compared to the training set, then by chance the model may fit the validation set better than the training set.
A: There are a number of reasons this can happen. You have not shown any information on the sizes of the training, validation and test data. If the validation set is too small, it does not adequately represent the probability distribution of the data. If your training set is small, there is not enough data to adequately train the model. Also, your model is very basic and may not be adequate to cover the complexity of the data. A dropout of 50% is high for such a limited model. Try using an established model like MobileNet version 1. It will be more than adequate for even very complex data relationships. Once that works, you can be confident in the data and build your own model if you wish.
In fact, validation loss and accuracy do not have real meaning until your training accuracy gets reasonably high, say 85%.
A: I solved this by simply increasing the number of epochs
A: This indicates the presence of high bias in your dataset. It is underfitting. The solutions to this issue are:
*
*Probably the network is struggling to fit the training data. Hence, try a
little bit bigger network.
*Try a different Deep Neural Network. I mean to say change the architecture
a bit.
*Train for longer time.
*Try using advanced optimization algorithms.
A: This is actually a pretty common situation. When there is not much variance in your dataset, you can get behaviour like this. Here you can find an explanation of why this might happen.
A: This happens when you use Dropout, since the behaviour when training and testing are different.
When training, a percentage of the features are set to zero (50% in your case since you are using Dropout(0.5)). When testing, all features are used (and are scaled appropriately). So the model at test time is more robust - and can lead to higher testing accuracies.
A: Adding dropout to your model gives it more generalization, but it doesn't have to be the cause. It could be because your data is unbalanced (has bias), and that's what I think it is.
A: I don't think it is a dropout layer problem.
I think it is more related to the number of images in your dataset.
The point here is that you are working with a large training set and a too-small validation/test set, so the latter is way too easy to predict.
Try data augmentation and other techniques to make your dataset bigger!
A: I agree with @Anas' answer; the situation might be resolved after you increase the number of epochs.
Everything is ok, but sometimes it is just a coincidence that the initialized model exhibits better performance on the validation/test dataset than on the training dataset. | unknown | |
d14012 | val | If the pattern doesn't match any path, the result will be empty indeed.
You have to split the MATCH in 2 and make the second one OPTIONAL, or in your actual case, stop matching the same u1 node over and over again:
MATCH (u1:User {user_id: 4})
OPTIONAL MATCH (u1)-[:FOLLOWS]->(:User)-->(r1:Rest {city_id: 1})
WITH u1, collect({ REST: r1.res_id }) AS rows
OPTIONAL MATCH (u1)-->(r2:Rest {city_id: 1})
WHERE NOT (u1)-[:BEEN_THERE | ADD_REVIEW]->(r2)
WITH rows + collect({ REST: r2.res_id }) AS allrows
UNWIND allrows as row
RETURN row.REST AS RESTAURANT_ID, count(row.REST) AS COUNT
ORDER BY COUNT desc
LIMIT 15
I'm not sure about the first OPTIONAL MATCH in your case (you only mention the second collect as being a blocker), but if you want the aggregation of both patterns where each can be empty, here you go. | unknown | |
d14013 | val | I'm not sure why this would be necessary, but I suppose you could wrap the tests you want to repeat in a for loop from 0 to N.
If you define N using int.fromEnvironment then you can pass in a value for N at the command line.
flutter test --dart-define=N=100
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:example/main.dart';
const N = int.fromEnvironment('N', defaultValue: 1);
void main() {
for (int i = 0; i < N; i++) {
testWidgets('Counter increments smoke test $i',
(WidgetTester tester) async {
// Build our app and trigger a frame.
await tester.pumpWidget(const MyApp());
// Verify that our counter starts at 0.
expect(find.text('0'), findsOneWidget);
expect(find.text('1'), findsNothing);
// Tap the '+' icon and trigger a frame.
await tester.tap(find.byIcon(Icons.add));
await tester.pump();
// Verify that our counter has incremented.
expect(find.text('0'), findsNothing);
expect(find.text('1'), findsOneWidget);
});
}
}
Alternatively you could write a program that runs flutter test N times. | unknown | |
d14014 | val | There are three scenarios where it is useful to use a character reference:
*
*When you aren't encoding the document in a Unicode encoding (hopefully you won't be this century)
*When you are using a character with special meaning in HTML (such as a ' inside an attribute value delimited by ' characters)
*When you don't have a keyboard layout with which you can type the character you want to use | unknown | |
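For the second scenario, most languages ship an escaping helper that emits exactly such references; for example, Python's standard library (shown here purely as an illustration, not part of the original answer):

```python
from html import escape

# quote=True also converts " and ' so the result is safe inside
# single- or double-quoted attribute values
print(escape("Bob's \"best\" <shop>", quote=True))
# → Bob&#x27;s &quot;best&quot; &lt;shop&gt;
```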
d14015 | val | Try this. In your solution, Twitter and YouTube are local variables in the function finished; after the function returns they no longer exist. If the function gets called again they are created again, but of course they don't have the values from last time since they are new variables. Maybe google 'javascript variable scope' for more information about this topic.
var Twitter = false;
var YouTube = false;
function finished(source) {
if (source == "Twitter") {
Twitter = true;
}
if (source == "YouTube") {
YouTube = true;
}
console.log(source);
console.log(Twitter);
console.log(YouTube);
if (Twitter && YouTube) {
console.log(socialPosts[0]);
}
}
Also add an error handler to your json-request via
$.getJSON(...).fail(function () {console.log('error');}); | unknown | |
d14016 | val | If you want to prompt the user for something from the terminal, the easiest way is probably to use java.io.Console, in particular one of its readLine() methods:
import java.io.Console;
...
Console console = System.console();
if (console == null) {
throw new IllegalStateException("No console to read input from!");
}
String fileName = console.readLine("What would you like to name your file? ");
// Whatever the user entered is now in fileName
See http://docs.oracle.com/javase/7/docs/api/java/io/Console.html. | unknown | |
d14017 | val | A very straightforward way is to use one of the rank functions from "dplyr" (eg: dense_rank, min_rank). Here, I've actually just used rank from base R. I've deleted some columns below just for presentation purposes.
library(dplyr)
mydf %>% mutate(bin = rank(BR))
# range X0 X1 total BR ... Index bin
# 1 (1,23] 5718 194 5912 0.03281461 ... 1.534535 2
# 2 (23,26] 5249 330 5579 0.05915039 ... 1.207544 8
# 3 (26,28] 3105 209 3314 0.06306578 ... 1.292856 10
# 4 (28,33] 6277 416 6693 0.06215449 ... 1.272937 9
# 5 (33,37] 4443 239 4682 0.05104656 ... 1.033207 5
# 6 (37,41] 4277 237 4514 0.05250332 ... 1.064326 7
# 7 (41,46] 4904 265 5169 0.05126717 ... 1.037913 6
# 8 (46,51] 4582 230 4812 0.04779717 ... 1.037198 4
# 9 (51,57] 4039 197 4236 0.04650614 ... 1.067437 3
# 10 (57,76] 3926 105 4031 0.02604813 ... 1.946684 1
If you just want to reorder the rows, use arrange instead:
mydf %>% arrange(BR)
A: bbT11$Bin[order(bbT11$BR)] <- 1:nrow(bbT11) | unknown | |
d14018 | val | If decimal.MinValue were only declared as a static readonly field, you wouldn't be able to use it as a compile-time constant elsewhere - e.g. for things like the default value of optional parameters.
I suppose the BCL team could provide both a constant and a read-only field, but that would confuse many people. If you're in the very rare situation where it makes a difference, introducing your own field looks like an entirely reasonable workaround.
Alternatively, the compiler could decide to just copy the value of the field in cases where it's feasible to do so instead of using the constructor. That would potentially end up accessing memory that wouldn't otherwise be touched - micro-tweaks like this can end up with unintended side-effects. I suspect that making the compiler handling of this simple was deemed more important than trying to guess what's going to be most efficient in every scenario. | unknown | |
d14019 | val | Assuming that the question is about the difference of Cipher.getInstance("AES") and Cipher.getInstance("AES/CFB/NoPadding"):
For Oracle JDK the default mode/padding when you do not specify them in the transformation string is "ECB/PKCS5Padding", meaning that Cipher.getInstance("AES") is the same as Cipher.getInstance("AES/ECB/PKCS5Padding").
The result of encrypting some data with AES/ECB/PKCS5Padding is predictably different from encrypting the same data with AES/CFB/NoPadding.
To minimize the confusion you should always specify the full transformation with explicit mode and padding values. | unknown | |
d14020 | val | If you need to use the ActiveCell, you can use something like the code below:
Dim ShtName As String
ShtName = ActiveCell.Value2 ' <-- save the value of the ActiveCell
Set wb = Application.Workbooks.Open(FilePath)
wb.Worksheets(1).Copy After:=activeWB.Sheets(activeWB.Sheets.Count)
' rename the sheet
activeWB.Sheets(activeWB.Sheets.Count).Name = ShtName
A: You've copied the new worksheet to the end, so you can get hold of the last worksheet like so:
Set activeWB = ActiveWorkbook
' You copied sheet 1
wb.Worksheets(1).Copy After:=activeWB.Sheets(activeWB.Sheets.Count)
' New sheet is last sheet
Dim sh As WorkSheet
Set sh = activeWB.Sheets(activeWB.Sheets.Count)
' Rename sheet
sh.Name = activeWB.Sheets("Arkusz1").ActiveCell.Value
Please note, relying on selections and active cells/sheets/workbooks is risky, and can create hard-to-diagnose errors. Try and use cell addresses instead of active cells. | unknown | |
d14021 | val | *
*java.io - difference between streams and writers. Buffered streams.
*java.util - the collection framework. Set and List. What's HashMap, TreeMap. Some questions on efficiency of concrete collections
*java.lang - wrapper types, autoboxing
*java.util.concurrent - synchronization aids, atomic primitives, executors, concurrent collections.
*multithreading - object monitors, synchronized keyword, methods - static and non-static.
A: I'd say there are two things you need for every java interview:
*
*For Basic knowledge of the Language, consult your favorite book or the Sun Java Tutorial
*For Best Practices read Effective Java by Joshua Bloch
Apart from that, read whatever seems appropriate to the job description, but I'd say these two are elementary.
I guess these packages are relevant for every java job:
*
*java.lang (Core classes)
*java.io (File and Resource I/O)
*java.util (Collections Framework)
*java.text (Text parsing / manipulation)
A: IMHO its more important to have a firm understanding of the concepts rather than specific knowledge of the API and especially the internal workings of specific classes. For example;
*
*knowing that HashMap is not synchronized is important
*knowing how this might affect a multithreaded app is important
*knowing what kind of solutions exist for this problem is important
A: I wouldn't worry too much about specific API details like individual methods of ConcurrentHashMap, unless you're interviewing for a job that is advertised as needing a lot of advanced threading logic.
A thorough understanding of the basic Java API's is more important, and books like Effective Java can help there. At least as important though is to know higher level concepts like Object Orientation and Design Patterns.
Understanding what Polymorphism, Encapsulation and Inheritance are, and when and how to use them, is vital. Know how to decide between Polymorphism and Delegation (is-a versus has-a is a decent start, but the Liskov Substitution Principle is a better guide), and why you may want to favor composition over inheritance. I highly recommend "Agile Software Development" by Robert Martin for this subject. Or check out this link for an initial overview.
Know some of the core patterns like singleton, factory, facade and observer. Read the GoF book and/or Head First Design Patterns.
I also recommend learning about refactoring, unit testing, dependency injection and mocking.
All these subject won't just help you during interviews, they will make you a better developer.
A: We usually require the following knowledge on new developers:
Low level (programming) questions:
http://www.interview-questions-java.com/
Antipatterns:
http://en.wikipedia.org/wiki/Anti-pattern
Design:
http://en.wikipedia.org/wiki/Design_Patterns
A: At some of the interviews I have been to, the java.io package is also covered, sometimes with absurd questions about what kind of exceptions some rarely used method declares to throw, or whether some strange-looking constructor overload exists.
Concurrency is always important for higher-level positions, but I think that knowing the concepts well (and understanding them, of course) would win you more points than specific API knowledge.
Some other APIs that get mentioned at interviews are Reflection (maybe couple questions on what can be achieved with it) and also java.lang.ref.Reference and its subclasses.
A: I ask some basic questions ('whats the difference between a list and a set?', 'whats an interface?', etc) and then I go off the resume. If hibernate is on there 5 times, I expect the candidate to be able to define ORM. You would be surprised how often it happens that they can't. I am also interested in how the candidate approaches software -- do they have a passion for it? And it is very important that the candidate believes in TDD. Naturally, if its a really senior position, the questions will be more advanced (e.g. 'whats ThreadLocal and when do you use it'), but for most candidates this is not necessary.
A: I completely agree with Luke here. We cannot stick to a few APIs to prepare for Core Java interviews. I think a complete understanding of the OOP concepts in Java is a must. Good knowledge of OOP shows the interviewer that the person can learn new APIs easily and quickly.
Topics that should be covered are as follows:
OOPS Concept
Upcasting & DownCasting
Threading
Collection framework.
Here is a good post to get started. Core Java Interview Q & A | unknown | |
d14022 | val | I ran into this problem myself. Here is my solution which I have tested in Firefox and Chrome:
Ensure the contenteditable div has the css white-space: pre, pre-line or pre-wrap so that it displays \n as new lines.
Override the "enter" key so that when we are typing, it does not create any <div> or <br> tags
myDiv.addEventListener("keydown", e => {
//override pressing enter in contenteditable
if (e.keyCode == 13)
{
//don't automatically put in divs
e.preventDefault();
e.stopPropagation();
//insert newline
insertTextAtSelection(myDiv, "\n");
}
});
Secondly, override the paste event to only ever fetch the plaintext
//override paste
myDiv.addEventListener("paste", e => {
//cancel paste
e.preventDefault();
//get plaintext from clipboard
let text = (e.originalEvent || e).clipboardData.getData('text/plain');
//insert text manually
insertTextAtSelection(myDiv, text);
});
And here is the supporting function which inserts text into the textContent of a div, and returns the cursor to the proper position afterwards.
function insertTextAtSelection(div, txt) {
//get selection area so we can position insert
let sel = window.getSelection();
let text = div.textContent;
let before = Math.min(sel.focusOffset, sel.anchorOffset);
let after = Math.max(sel.focusOffset, sel.anchorOffset);
//ensure string ends with \n so it displays properly
let afterStr = text.substring(after);
if (afterStr == "") afterStr = "\n";
//insert content
div.textContent = text.substring(0, before) + txt + afterStr;
//restore cursor at correct position
sel.removeAllRanges();
let range = document.createRange();
//childNodes[0] should be all the text
range.setStart(div.childNodes[0], before + txt.length);
range.setEnd(div.childNodes[0], before + txt.length);
sel.addRange(range);
}
https://jsfiddle.net/1te5hwv0/
A: Sadly, you can’t. As this answer points out the spec only specifies true, false and inherit as valid parameters. The subject seems to have been discussed but if I’m not mistaken only Webkit implements support for plaintext-only. | unknown | |
d14023 | val | I have made a small example to demonstrate with an image: when you scroll down, an animation is shown, and once you scroll back up the animation is reverted.
CSS STYLE:
.classname {
-webkit-animation-name: cssAnimation;
-webkit-animation-duration: 3s;
-webkit-animation-iteration-count: 1;
-webkit-animation-timing-function: ease;
-webkit-animation-fill-mode: backwards;
}
.classname1 {
-webkit-animation-name: cssAnimation1;
-webkit-animation-duration: 3s;
-webkit-animation-iteration-count: 1;
-webkit-animation-timing-function: ease;
-webkit-animation-fill-mode: forwards;
}
@-webkit-keyframes cssAnimation1 {
from {
-webkit-transform: rotate(1deg) scale(1) skew(0deg) translate(20px);
}
to {
-webkit-transform: rotate(1deg) scale(1) skew(0deg) translate(20px);
}
}
@-webkit-keyframes cssAnimation {
from {
-webkit-transform: rotate(0deg) scale(2) skew(0deg) translate(100px);
}
to {
-webkit-transform: rotate(0deg) scale(2) skew(0deg) translate(100px);
}
}
**JAVASCRIPT PART**
var lastScrollTop = 0;
document.addEventListener("scroll", function(){
var value = window.pageYOffset || document.documentElement.scrollTop;
if (value > lastScrollTop){
scrollDownAnnimation();
} else {
scrollUpAnnimation();
}
lastScrollTop = value <= 0 ? 0 : value;
});
function scrollDownAnnimation() {
document.getElementById('img').className = 'classname';
}
function scrollUpAnnimation() {
document.getElementById('img').className = 'classname1';
}
**HTML PART: Please make sure you have added enough page content so that the page can actually scroll**
<div>
<img id="img" src="https://i.stack.imgur.com/vghKS.png" width="328" height="328" />
</div>
For Demo: DEMO
Hope this help, Thanks! | unknown | |
d14024 | val | I discovered that on my local machine there was Visual Studio 2013 Update 5, while on the TFS server there was Visual Studio 2013 RTM (no update).
I resolved it by updating Visual Studio 2013 to the latest version (Update 5) on the server where TFS is installed.
d14025 | val | In your .getCompanies() call right after the .map add a .retryWhen:
.retryWhen((errors) => {
return errors.scan((errorCount, err) => errorCount + 1, 0)
.takeWhile((errorCount) => errorCount < 2);
});
In this example, the observable completes after 2 failures (errorCount < 2).
A: You mean something like this?
this._api.getCompanies().subscribe(this.updateCompanies.bind(this))
updateCompanies(companies, exception) {
companies => this.companies = JSON.parse(companies),
exception => {
if(this._api.responseErrorProcess(exception)) {
// in case this retured TRUE then I need to retry()
this.updateCompanies(companies, exception)
}
}
} | unknown | |
d14026 | val | Go to File --> Project Structure --> click on app --> on the right side 4 tabs will appear; select Build there and enter your details. That's it.
Also important are these in your build.gradle file
android {
signingConfigs {
ProdSigningKey {
keyAlias 'any alias name'
keyPassword 'your actual password'
storeFile file('keystore file path on your computer')
storePassword 'your actual password'
}
}
} | unknown | |
d14027 | val | Assuming that your property names and the dictionary keys are the same, you can use this function to convert any object
- (void) setObject:(id) object ValuesFromDictionary:(NSDictionary *) dictionary
{
for (NSString *fieldName in dictionary) {
[object setValue:[dictionary objectForKey:fieldName] forKey:fieldName];
}
}
A: this will be more convenient for you :
- (instancetype)initWithDictionary:(NSDictionary*)dictionary {
if (self = [super init]) {
[self setValuesForKeysWithDictionary:dictionary];}
return self;
}
A: Add a new initWithDictionary: method to Order:
- (instancetype)initWithDictionary:(NSDictionary*)dictionary {
if (self = [super init]) {
self.OrderId = dictionary[@"OrderId"];
self.Title = dictionary[@"Title"];
self.Weight = dictionary[@"Weight"];
}
return self;
}
Don't forget to add initWithDictionary's signature to Order.h file
In the method where you get JSON:
NSData *jsonData = [jsonString dataUsingEncoding:NSUTF8StringEncoding];
NSError *e;
NSDictionary *dict = [NSJSONSerialization JSONObjectWithData:jsonData options:kNilOptions error:&e];
Order *order = [[Order alloc] initWithDictionary:dict];
A: If the property names on your object match the keys in the JSON string you can do the following:
To map the JSON string to your Object you need to convert the string into a NSDictionary first and then you can use a method on NSObject that uses Key-Value Coding to set each property.
NSError *error = nil;
NSData *jsonData = ...; // e.g. [myJSONString dataUsingEncoding:NSUTF8Encoding];
NSDictionary *jsonDictionary = [NSJSONSerialization JSONObjectWithData:jsonData options:NSJSONReadingAllowFragments error:&error];
MyObject *object = [[MyObject alloc] init];
[object setValuesForKeysWithDictionary:jsonDictionary];
If the keys do not match you can override the NSObject instance method -[NSObject setValue:forUndefinedKey:] in your object class.
To map you Object to JSON you can use the Objective-C runtime to do it automatically. The following works with any NSObject subclass:
#import <objc/runtime.h>
- (NSDictionary *)dictionaryValue
{
NSMutableArray *propertyKeys = [NSMutableArray array];
Class currentClass = self.class;
while ([currentClass superclass]) { // avoid printing NSObject's attributes
unsigned int outCount, i;
objc_property_t *properties = class_copyPropertyList(currentClass, &outCount);
for (i = 0; i < outCount; i++) {
objc_property_t property = properties[i];
const char *propName = property_getName(property);
if (propName) {
NSString *propertyName = [NSString stringWithUTF8String:propName];
[propertyKeys addObject:propertyName];
}
}
free(properties);
currentClass = [currentClass superclass];
}
return [self dictionaryWithValuesForKeys:propertyKeys];
}
A: The perfect way to do this is by using a library for serialization/deserialization
many libraries are available but one i like is
JagPropertyConverter
https://github.com/jagill/JAGPropertyConverter
it can convert your Custom object into NSDictionary and vice versa
even it support to convert dictionary or array or any custom object within your object (i.e Composition)
JAGPropertyConverter *converter = [[JAGPropertyConverter alloc]init];
converter.classesToConvert = [NSSet setWithObjects:[Order class], nil];
@interface Order : NSObject
@property (nonatomic, retain) NSString *OrderId;
@property (nonatomic, retain) NSString *Title;
@property (nonatomic, retain) NSString *Weight;
@end
//For Dictionary to Object (AS IN YOUR CASE)
NSMutableDictionary *dictionary = [[NSMutableDictionary alloc] init];
[dictionary setValue:self.OrderId forKey:@"OrderId"];
[dictionary setValue:self.Title forKey:@"Title"];
[dictionary setValue:self.Weight forKey:@"Weight"];
Order *order = [[Order alloc]init];
[converter setPropertiesOf:order fromDictionary:dictionary];
//For Object to Dictionary
Order *order = [[Order alloc]init];
order.OrderId = @"10";
order.Title = @"Title";
order.Weight = @"Weight";
NSDictionary *dictOrder = [converter convertToDictionary:order];
A: Define your custom class inherits from "AutoBindObject". Declare properties which has the same name with keys in NSDictionary. Then call method:
[customObject loadFromDictionary:dic];
Actually, we can customize class to map different property names to keys in dictionary. Beside that, we can bind nested objects.
Please have a look to this demo. The usage is easy:
https://github.com/caohuuloc/AutoBindObject | unknown | |
d14028 | val | Since Elasticsearch 2.3, the FilterBuilders class has been removed from the Java API. You can use
QueryBuilder qb = QueryBuilders.boolQuery()
.must(QueryBuilders.matchQuery("_all", "JPMORGAN"))
.must(QueryBuilders.matchQuery(field, value)) ;
instead, and set it to
.setQuery(qb).
A: I think this will help.
SearchResponse response=
client.prepareSearch("your_index_name_here").setQuery(QueryBuilders.filteredQuery(QueryBuilders.matchAllQuery(),
FilterBuilders.andFilter(
FilterBuilders.termFilter("server","x"),
FilterBuilders.termFilter("dt_time","x")
))).addAggregation(
AggregationBuilders.terms("dt_timeaggs").field("dt_time").size(100).subAggregation(
AggregationBuilders.terms("cpu_aggs").field("cpu").size(100)
)
).setSize(0).get();
please verify. | unknown | |
d14029 | val | Remember: The Swing toolkit is pretty good at getting the system look and feel "almost right". If you really need the system feel, however, there are other options like SWT that are a little better suited.
If you want consistency then Swing can always default to the old-school applet look which, although a little boring, is pretty reliable. The upside of this is that, when we roll back to this more simplistic-looking GUI toolkit, platform-specific quirks magically seem to disappear.
I think you might find your problem solved quite easily if you simply set the look and feel to the standard swing style look and feel.
The Solution
I would suggest a call to
UIManager.setLookAndFeel(
UIManager.getCrossPlatformLookAndFeelClassName());
In the static code block where you initialize your application. | unknown | |
d14030 | val | Normally you would be able to set this with environment variables when you start the program or container. In Apache Superset, this is not possible. There is an ongoing discussion on Github about this issue. One GitHub user posts the problem and workaround, which is far from workable:
Daylight savings causes issues where users have to update datasource
timezone offset for each datasource twice per year.
So the only thing you can do is update the hours offset twice a year. To make matters even worse, if you use Postgresql, this may not even be possible due to a bug as described here. | unknown | |
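Until that bug is fixed, the twice-yearly value itself is at least easy to look up programmatically. Below is a hedged Python sketch (not part of Superset — just a helper to compute the offset you would paste into the datasource settings):

```python
from datetime import datetime, timezone, timedelta

def current_offset_hours(tz):
    """Current UTC offset in hours for a tzinfo object -- the value
    to enter as the datasource timezone offset until the next DST switch."""
    return datetime.now(tz).utcoffset().total_seconds() / 3600

# Fixed-offset zones never change:
print(current_offset_hours(timezone.utc))                              # 0.0
print(current_offset_hours(timezone(timedelta(hours=5, minutes=30))))  # 5.5

# For a real DST zone, pass zoneinfo.ZoneInfo("Europe/Amsterdam")
# (Python 3.9+); the result then flips between two values twice a year.
```

Running this at each DST switch at least removes the guesswork from the manual update.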
d14031 | val | Your page is not even remotely valid HTML. For one thing, you have two body elements.
Check out W3C Validation of your page for more problems.
If a browser gets invalid HTML it makes its best guess at what the DOM should be (as opposed to a deterministic interpretation). Since browsers are designed by independent teams, these interpretations will differ. Then, when it comes to applying CSS, variations are bond to occur.
Get your HTML in order and then see what happens.
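As a quick local complement to the W3C validator, you can count start tags with Python's standard library to catch duplicated singleton elements such as a second body. This is just an illustrative sketch (the markup fed to it is stand-in HTML with the same flaw described above), not a full validator:

```python
from collections import Counter
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    """Count every start tag so duplicates of singleton elements stand out."""
    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        self.counts[tag] += 1

parser = TagCounter()
# Stand-in markup with two body elements, like the page discussed above.
parser.feed("<html><head></head><body><p>a</p></body><body><p>b</p></body></html>")
for tag in ("html", "head", "body"):
    if parser.counts[tag] > 1:
        print(f"warning: <{tag}> appears {parser.counts[tag]} times")
# -> warning: <body> appears 2 times
```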
A: Older versions of IE are known to display pages slightly differently than most "modern" browsers. A quick Google search turned up the following lists of common differences:
http://www.wipeout44.com/brain_food/css_ie_bug_fixes.asp
http://css-tricks.com/ie-css-bugs-thatll-get-you-every-time/ | unknown | |
d14032 | val | NAudio can read information out of SoundFont files, but it does not include a SoundFont engine. For that you would need a good pitch shifting algorithm, some filters, and some voice management, as well as a sequencer if you wanted to play back MIDI files.
The closest I have come to building something like this is a demo I made for my NAudio Pluralsight course, in which I build a simple sampled piano based on some piano note recordings. If you are subscriber, you are free to use it. The technique I use is to load the sample into memory, connect a RawSourceWaveStream to it, convert it into a sample provider, and then pass it through a pitch shifter sample provider, based on the one I ported to C# for this open source project. | unknown | |
d14033 | val | This looks like a bug in Chrome.
I searched Chromium bugs and found a few that are similar:
*
*Issue 516127: Rendering artifacts on osx when something moves above the browser (dock, other windows, etc)
*Issue 473933: Visual rendering issue
*Issue 476909: Page didn't redraw correctly
*Issue 245946: Content isn't layout correctly when resizing window
But actually none of them seems to describe your problem exactly. If you think so too, you can report your bug here.
Note that this might be as well an issue related to your old Mac OS version or even the graphics card.
A: Try adding this just after <head> tag
<meta name="viewport"
content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
A: I had the same problem using cordova (hybrid html5/css3 mobile platform), when rotating the device the internal browser did not refresh correctly. I had a div containing everything with a fixed position. After some tries, I used style width/height: 100% and when rotating the browser refreshed correctly.
A: I second the suggestion to check plugins installed in case there's a conflict.
Looking on Google's forums, the issue has arisen before and was scheduled for a bugfix. Noticed while reading there that they also removed the support for browser window resizing via javascript :/
A: I'm guessing you have a retina display Mac and another external display connected? I see this sometimes on my setup and have tentatively narrowed it down to this situation. When using just the native screen I never see it and not in all screen configurations either.
Try and see if it happens when you use only the built in screen. If that works try switching which screen is the main screen or switch to another external screen altogether.
Sorry for the no fix, but maybe it will get you closer to a solution in your setup at least. | unknown | |
d14034 | val | Here is a comprehensive tutorial:
http://yajsw.sourceforge.net/
and one for windows service
https://docs.wso2.org/display/Carbon403/Installing+as+a+Windows+Service
A: You can play around with the scripts located in yajsw/bat, specifically with setenv.bat.
That is the script that creates your environment variables.
A: I found and looked through about 20 wrappers and helpers. Some of them are paid, some need code modification, some are complex, and some have issues. I found only one good solution - NSSM. I think it solves the problem as it should be solved.
d14035 | val | Try this code:
-- declare a XML variable
DECLARE @XmlInput XML;
-- load the XML from the file into that XML variable
SELECT @XmlInput = CAST(c1 AS XML)
FROM OPENROWSET (BULK 'D:\Tasks\Test1.xml',SINGLE_BLOB) AS T1(c1)
-- extract the "Name" attribute and "INT10" element from the XML
SELECT
Name = XC.value('@Name', 'varchar(50)'),
Int10Value = XC.value('(INT10)[1]', 'varchar(100)')
FROM
    @XmlInput.nodes('/TextValuess/TextValues') AS XT(XC)
The call to .nodes() using the built-in, much preferred XQuery functionality (dump the OPENXML stuff - it's old and legacy and has memory leaks - XQuery is much easier to use, too!) returns a list of XML fragments - one for each match of the XPath expression in your document (here: one for each <TextValues> node under the root).
Then you reach into that XML fragment, and extract the name attribute (using the @Name expression), and the first (and only) <INT10> sub-element and convert those to "regular" values (with a datatype defined by the second parameter of the .value() call) | unknown | |
d14036 | val | For anyone experiencing the same problem, I was finally able to find a solution.
The problem is that GCE auth is set by the "gargle" package, instead of using the "normal user OAuth flow".
To temporarily disable GCE auth, I'm using the following piece of code now:
library(gargle)
cred_funs_clear()
cred_funs_add(credentials_user_oauth2 = credentials_user_oauth2)
gm_auth_configure(path = "credentials.json")
options(
gargle_oauth_cache = "secret",
gargle_oauth_email = "[email protected]"
)
gm_auth(email = "email.which.was.used.for.credentials.com")
cred_funs_set_default()
For further references see also here. | unknown | |
d14037 | val | Make sure you have the following dependency in your pom:
<dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>sqljdbc4</artifactId>
<version>4.0</version>
</dependency> | unknown | |
d14038 | val | Write a method which will insert 1000 records and mark it as @Transactional(propagation = Propagation.REQUIRES_NEW)
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void saveData(List<Object> data)..
Then, call that method a few times; whenever the method is called, a new transaction will be created.
d14039 | val | Linux SGX SSL Crypto Lib has now been open sourced and it's available here: https://github.com/01org/intel-sgx-ssl
A: I found an alternative solution to OpenSSL namely mbedtls here.
It is available for Linux and Windows and the compiled libraries only need to be linked against the application and enclave.
A: TaLoS is a TLS library that allows existing applications (with an OpenSSL/LibreSSL interface) to securely terminate their TLS connection inside an Intel SGX enclave. The code is available on GitHub.
There also is a technical report containing details about the architecture and performance results. | unknown | |
d14040 | val | You can use the HTTP client axios.
Performing a POST request
var axios = require('axios');
axios.post('/user', {
firstName: 'Fred',
lastName: 'Flintstone'
})
.then(function (response) {
console.log(response);
})
.catch(function (error) {
console.log(error);
});
A: const request = require('request');
var url = 'blabla';
request.post(
url
, { json: { api: url } }
, function (err, res, bdy) {
if (!err && res.statusCode == 200)
console.log(bdy)
}
); | unknown | |
d14041 | val | In C++, const is really just logical constness and not physical constness. func1 can do a const_cast and modify i. const is like the safety of a gun - you can still shoot yourself in the foot, but not by accident.
As T.C. and juanchopanza have pointed out in the comments, casting away the constness of an object and modifying it is UB. Quoting from "Notes" here :
Even though const_cast may remove constness or volatility from any pointer or reference, using the resulting pointer or reference to write to an object that was declared const or to access an object that was declared volatile invokes undefined behavior.
A: Summing up the answers, I think this explains it best:
It is legal to take a const reference to a non-const variable, and then cast away the constness. Therefore, the compiler in the first case cannot assume that func1 will not change a.
It is undefined what happens if you cast away the constness to a variable declared const. The compiler in the second case may assume that func1 will not cast away the constness. If func1 does cast away the constness, func2 will receive the "wrong" value, but that's just one consequence of undefined behaviour. | unknown | |
d14042 | val | The message is saying that the member AudioInputDevices and the member VideoInputDevices are not declared as static in the type DirectX.Capture.Filters, but you are using them as if they were static.
To reference a member that's not static, you need to instantiate that type, by calling the constructor (directly, or indirectly via some kind of factory method) of that type (DirectX.Capture.Filters).
In other words, you need something like this:
var filters = new DirectX.Capture.Filters(...);
var capture = new Capture(filters.VideoInputDevices[0], filters.AudioInputDevices[0]); | unknown | |
d14043 | val | Since you use Visual C++, you can use the
_splitpath and _wsplitpath functions to break apart a path
A: You can use the Windows shell API function PathRemoveFileSpec to do this. Example usage is listed on the linked page. | unknown | |
d14044 | val | I have two components A and B which are both are integrated on one
page. Both components need access to data set C.
As long as they are being used in one page, why are you calling it from each component? I think you can call it from the page component and send it to the children.
The children can get the data using the @Input() decorator. You can send the data directly, or you can send it as an observable (it's fine if it is a BehaviorSubject or Subject).
I hope this helps.
d14045 | val | Use the traceback module.
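For instance, traceback.format_exc() returns the full stack trace of the exception currently being handled as a string, which you can log or display (a minimal sketch):

```python
import traceback

def risky():
    return 1 / 0

try:
    risky()
except ZeroDivisionError:
    trace = traceback.format_exc()  # full stack trace as a string
    print(trace)

# The string starts with "Traceback (most recent call last):" and ends
# with the exception type and message, e.g. "ZeroDivisionError: division by zero".
```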
d14046 | val | The form is trying to submit the provided data to the url: 'file:///android_asset/www/submit'.
The url is submitted via the action attribute inside the <form> tag:
<form action="submit" id="login" name="login_form">
To prevent this from happening just take the action attribute out of the tag.
Since you are new to Phonegap/Cordova you probably don't know that you shouldn't wait for the document.ready event but instead for the deviceready. Now when you change this, your form submit will probably no longer work. This is because the deviceready event fires once when the App is launched and ready to use. It doesn't wait for a button click or a function declared inside the deviceready function.
How can we get this to work again? That's simple: just call a function when the submit button is clicked. You will need to do something like this:
<form id="login" name="login_form" onsubmit="mySubmitFunction();">
The called function should look something like this:
function mySubmitFunction(){
  $("#login").unbind("submit"); //this removes the submit event-handler from the form
if($("#uName").val()==""){
alert("Please fill username field.");
}
else if($("#password").val()==""){
alert("Please fill password field.");
}
else {
//insert into your database
db.transaction(populateDB, transaction_err, populateDB_success);
}
}
Inside the onDeviceReady function just open the database:
function onDeviceReady() {
db = window.openDatabase("UserDB", "1.0", "Login", 10000);
} | unknown | |
d14047 | val | Using multiple GPUs
If developing on a system with a single GPU, you can simulate multiple GPUs with virtual devices. This enables easy testing of multi-GPU setups without requiring additional resources.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Create 2 virtual GPUs with 1GB memory each
try:
tf.config.set_logical_device_configuration(
gpus[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=1024),
tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
NOTE: Virtual devices cannot be modified after being initialized
Once there are multiple logical GPUs available to the runtime, you can utilize the multiple GPUs with tf.distribute.Strategy or with manual placement.
With tf.distribute.Strategy best practice for using multiple GPUs, here is a simple example:
tf.debugging.set_log_device_placement(True)
gpus = tf.config.list_logical_devices('GPU')
strategy = tf.distribute.MirroredStrategy(gpus)
with strategy.scope():
inputs = tf.keras.layers.Input(shape=(1,))
predictions = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
model.compile(loss='mse',
optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
This program will run a copy of your model on each GPU, splitting the input data between them, also known as "data parallelism".
For more information about distribution strategies or manual placement, check out the guides on the links.
A: The RAM complaint isn't about your system ram (call it CPU RAM). It's about your GPU RAM.
The moment TF loads, it allocates all the GPU RAM for itself (some small fraction is left over due to page size stuff).
Your sample makes TF dynamically allocate GPU RAM, but it could still end up using up all the GPU RAM. Use the code below to provide a hard stop on GPU RAM per process. you'll likely want to change 1024 to 8096 or something like that.
and FYI, use nvidia-smi to monitor your GPU ram usage.
From the docs:
https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only allocate 1GB of memory on the first GPU
try:
tf.config.set_logical_device_configuration(
gpus[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e) | unknown | |
d14048 | val | As @jasonharper pointed out, it's much easier to use a Scale in your case:
from tkinter import *
on_update = lambda e: print(e)
# |in pixels| |resolution| |the slider, pixels| |switch value display
s=Scale(command=on_update, length=250, to=1000, sliderlength=50, showvalue=False)
s.pack()
print(s.get())
s.set(20) # note, this also executes the "command" property
Hope that's helpful! | unknown | |
d14049 | val | I have the same problem. It seems that greendao is currently not able to do that. I am resorting to using queryRaw() instead. | unknown | |
d14050 | val | You need to run it with quotes and capital TRUE:
install.packages("rvest_0.3.5.tar.gz", dependencies = TRUE)
Note this will only work if you have unix-like system and the file is located in your current working directory (check with running getwd() from your R session). Otherwise you need to provide full path to the file (like "~/somefolder/mydata/rvest_0.3.5.tar.gz").
If you run Windows, then you need .zip file instead of .tar.gz.
If you are connected to internet, just run:
install.packages("rvest") | unknown | |
d14051 | val | Create a button after your content divs and call function on this button
<input type="button" value="Next" onclick="ShowNextTab();" />
function ShowNextTab() {
if ($('.nav-tabs > .active').next('li').length == 0) //If you want to select first tab when last tab is reached
$('.nav-tabs > li').first().find('a').trigger('click');
else
$('.nav-tabs > .active').next('li').find('a').trigger('click');
}
Below is a complete solution
HTML
<form id="form1" runat="server">
<div class="container">
<ul class="nav nav-tabs">
<li class="active"><a data-toggle="tab" href="#personal">Personal Information</a></li>
<li><a data-toggle="tab" href="#professional">Professional Information</a></li>
<li><a data-toggle="tab" href="#accountinformation">User Account Infromation</a></li>
</ul>
<div class="tab-content">
<div id="personal" class="tab-pane fade in active">
<div class="form-group">
<div class="row">
<div class="col-sm-12 col-md-12 col-lg-12">
<div class="col-sm-4">
<span class="Star-clr">*</span>First Name :
</div>
<div class="col-sm-8">
<asp:TextBox ID="txtName" runat="server" placeholder="First Name"></asp:TextBox> <!-- close tag was missing in the original markup -->
</div>
</div>
</div>
</div>
<div class="form-group">
<div class="row">
<div class="col-sm-12 col-md-12 col-lg-12">
<div class="col-sm-2">
</div>
<div class="col-sm-10" style="float: right">
<asp:Button ID="btnNext" Width="150" runat="server" Text="NEXT" />
</div>
</div>
</div>
</div>
</div>
<div id="professional" class="tab-pane fade">
</div>
<div id="accountinformation" class="tab-pane fade">
</div>
<input type="button" value="Next" onclick="ShowNextTab();" />
<input type="button" value="Prev" onclick="ShowPrevTab();" />
</div>
</div>
</form>
JavaScript
function ShowNextTab() {
$('.nav-tabs > .active').next('li').find('a').trigger('click');
}
function ShowPrevTab() {
$('.nav-tabs > .active').prev('li').find('a').trigger('click');
} | unknown | |
d14052 | val | Try DumpRenderTree - headless chrome which outputs a textual version of layout.
eg:
Content-Type: text/plain
layer at (0,0) size 808x820
RenderView at (0,0) size 800x600
layer at (0,0) size 800x820
RenderBlock {HTML} at (0,0) size 800x820
RenderBody {BODY} at (8,8) size 784x804
RenderHTMLCanvas {CANVAS} at (0,0) size 800x800 [bgcolor=#808080]
RenderText {#text} at (0,0) size 0x0
#EOF
#EOF
Kevin Moore's blog post "Headless Browser Testing with Dart" explains the details (and the above snippet is taken from that) | unknown | |
d14053 | val | I was expecting a simple error: I had missed adding the permission to read storage. It is working now.
d14054 | val | This is most likely an optimization feature of your compiler. For example, when I compiled your code using CL (MSVC compiler) without any optimization option, I got the following results:
But turning on fast code option, resulted in a more optimized memory usage:
Commands for disabled and fast code options of CL, respectively:
>cl /OD app.cpp
>cl /O2 app.cpp
You should refer to your compiler documentations to figure out the optimization options. | unknown | |
d14055 | val | It is mainly for performance as well as ease of use.
When you use an external library inside PHP via system(), for example, the pros are that you will be able to use ALL of its options, which will make you a power user. The cons are that, each time you run it, you have to parse the returned string and then figure out the results, which is a hassle and pretty error-prone.
When you use a language binding of the external library, the cons are that you are confined to the API calls that the binding provides. The pros are that the return values, the error status, etc. are well defined within the API calls, so handling the calls is easier.
This is usually a tradeoff and it will vary from case to case as to whether one should use native interfaces or just execute the library directly. | unknown | |
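The same tradeoff exists in any language. To make the parsing hassle concrete, here is a generic Python sketch; the external "tool" is a stand-in command that prints key=value lines, not any real library mentioned above:

```python
import subprocess
import sys

# Stand-in for an external tool that prints "key=value" lines on stdout.
cmd = [sys.executable, "-c", "print('width=640'); print('height=480')"]

result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    # Error reporting is whatever the tool chose to print on stderr.
    raise RuntimeError(result.stderr)

# The parsing burden: split lines, split pairs, convert types by hand.
info = dict(line.split("=", 1) for line in result.stdout.splitlines())
width = int(info["width"])  # a language binding would return a typed value directly
print(width)  # 640
```

A binding hides all of this behind typed function calls, at the cost of being limited to whatever the binding exposes.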
d14056 | val | Right-click on References in your project, select Add Reference, and browse to the path of Msctf.dll.
link (Register COM) : http://msdn.microsoft.com/en-us/library/ms859484.aspx | unknown | |
d14057 | val | Or you could try the FaceDetector class. Its available since API Level 1.
A: Try attach native libraries for OpenCV to your project and use OpenCVLoader.initDebug(); to initialization. | unknown | |
d14058 | val | Move the assignment of
string item;
item = "Empty space";
Before the while loop.
Right now, every time you loop you overwrite the item value.
Here's how the whole code would look after the change:
static void Main(string[] args)
{
bool isRunning = true;
string item = "Empty space";
while (isRunning)
{
Console.WriteLine("\n\tWelcome to the backpack!");
Console.WriteLine("\t[1]Add an item");
Console.WriteLine("\t[2]Show contents");
Console.WriteLine("\t[3]Clear contents");
Console.WriteLine("\t[4]Exit");
Console.Write("\tChoose: ");
int menyVal = Convert.ToInt32(Console.ReadLine());
switch (menyVal)
{
case 1:
Console.WriteLine("\n\tContents of backpack:");
Console.WriteLine("\n\t" + item);
Console.WriteLine("\n\tWhat do you want to replace " + item + " with?");
item = Console.ReadLine();
Console.WriteLine("\n\tYou have packed " + item + " in your backpack");
break;
case 2:
Console.WriteLine("\n\tContents of backpack:");
Console.WriteLine("\n\t" + item);
Console.WriteLine("\n\tPress any key...");
Console.ReadKey();
break;
case 3:
item = "Empty space";
Console.WriteLine("\n\tYou have emptied the backpack!");
break;
case 4:
isRunning = false;
break;
default:
Console.WriteLine("Incorrect input!");
break;
}
}
} | unknown | |
d14059 | val | H"
end tell
end tell
set cellNumber to 2
tell application "Microsoft Excel"
activate
repeat
set fileName to get value of cell ("B" & cellNumber) as string
set fncount to count characters of fileName
if fncount is greater than 13 then
delete entire row of cell ("B" & cellNumber)
set endCount to 0
else
set endCount to endCount + 1
if endCount > 100 then
exit repeat
end if
end if
set cellNumber to cellNumber + 1
end repeat
end tell
set endCount to 0
A: This does not delete everything, because when the script deletes a row, Excel will shift the rows up.
Example: the script deletes the second row; now the third row becomes the second row, so the script skips a row.
To avoid that, the loop must start at the index of the last row.
Use the used range property to get the last row.
tell application "Microsoft Excel"
activate
open (choose file with prompt "Select the Excel file you wish to use.")
tell active sheet
set cellNumber to 2
autofit column "A:H"
set lastR to count rows of used range -- get the index of the last row which contains a value
repeat with i from lastR to cellNumber by -1 -- iterates backwards from the index of the last row
set fileName to string value of cell ("B" & i)
if (count fileName) > 13 then delete row i
end repeat
end tell
end tell | unknown | |
d14060 | val | If you are stuck with starting with the StringBuilder then I think you've pretty much worked out what you need to do.
I would make it a little cleaner like this though:
var prefix = "SELECT ";
var suffix = " From fruit_table";
var result =
String.Format("{0}{2}{1}",
prefix,
suffix,
String.Join(",",
items
.ToString()
.Replace(prefix, "")
.Replace(suffix, "")
.Split(',')
.Select(x => x.Trim())
.Distinct()));
items.Clear();
items.Append(result);
Before:
SELECT Apple,Carrot,Pear,Orange,Apple From fruit_table
After:
SELECT Apple,Carrot,Pear,Orange From fruit_table
If you know that there are no spaces between the names of the columns, then this is slightly cleaner:
var result =
String.Format("{0}{2}{1}",
prefix,
suffix,
String.Join(",",
items
.ToString()
.Split(' ')[1]
.Split(',')
.Distinct()));
A: Here's another way to do it without having to hard-code the prefix and suffix elements:
// Encapsulate the behavior in an extension method we can run
// directly on a StringBuilder object
public static StringBuilder DeduplicateColumns(this StringBuilder input) {
// Assume that we can split into large "chunks" on spaces
var sections = input.ToString().Split(' ');
var resultSections = new List<string>();
foreach (var section in sections) {
var items = section.Split(',');
// If there aren't any commas, spit this chunk back out
// Otherwise, split on the commas and get distinct items
if (items.Count() == 1)
resultSections.Add(section);
else
resultSections.Add(string.Join(",", items.Distinct()));
}
return new StringBuilder(string.Join(" ", resultSections));
}
Test code:
var demoStringBuilder = new StringBuilder
("SELECT Apple,Carrot,Pear,Orange,Apple From fruit_table");
var cleanedBuilder = demoStringBuilder.DeduplicateColumns();
// Output: SELECT Apple,Carrot,Pear,Orange From fruit_table
Here's a fiddle: link | unknown | |
d14061 | val | with open("testfile.txt", "r") as r:
with open("testfile_new.txt", "w") as w:
w.write(r.read().replace(' ', '\n'))
A: Example:
with open("file1.txt", "r") as read_file:
with open("file2.txt", "w") as write_file:
write_file.write(read_file.read().replace(" ", '\n'))
Content of file1.txt:
15.9 17.2 18.6 10.5
Content of file2.txt:
15.9
17.2
18.6
10.5
NOTE:
Or you can use the split and join method instead of replace.
write_file.write("\n".join(read_file.read().split()))
A: try like this instead
f = open("testfile.txt", "r")
text=f.read()
f.close()
f=open("testfile.txt", "w+")
text2=''
if ' ' in text:
text2 = text.replace(' ' , '\n')
print(text2)
f.write(text2)
f.close()
A: You can try using string replace:
string = string.replace(' ', '\n')
Firstly: f.close() is missing from your code.
Secondly: try the above code. It will replace spaces with new lines.
A: Use str.split with str.join
Ex:
with open("testfile.txt", "r") as infile:
data = infile.read()
with open("testfile.txt", "w") as outfile:
outfile.write("\n".join(data.split())) | unknown | |
d14062 | val | The default settings for ffmpeg do not always provide a good quality output when you encode, but this depends on your output format and the available encoders. With your output ffmpeg will use the default of -b 200k or -b:v 200k.
However, you can tell ffmpeg to simply copy the input streams without re-encoding and this is recommended if you just want to add or edit metadata. These examples do the same thing but use different syntax depending on your ffmpeg version:
ffmpeg -i hk.avi -vcodec copy -acodec copy -metadata title="SOF" hk_titled.avi
ffmpeg -i hk.avi -c copy -metadata title="SOF" hk_titled.avi | unknown | |
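If you drive ffmpeg from a script, the stream-copy form above can be built as an argument list before running it. This is only a sketch: build_copy_cmd is a made-up helper name, and it assumes the same -c copy flags shown above.

```python
def build_copy_cmd(infile, outfile, title):
    # Stream-copy audio and video (no re-encoding) and set the title metadata
    return ["ffmpeg", "-i", infile, "-c", "copy",
            "-metadata", "title=" + title, outfile]

cmd = build_copy_cmd("hk.avi", "hk_titled.avi", "SOF")
# subprocess.run(cmd, check=True)  # uncomment to actually invoke ffmpeg
```

Passing the arguments as a list avoids shell quoting problems with titles that contain spaces.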
d14063 | val | I copied your code into an ionic stackblitz project and was unable to reproduce your issue.
https://stackblitz.com/edit/ionic-ojdypw
Maybe there is something there that can help you. | unknown | |
d14064 | val | Specifics of the solution might depend on the Prolog dialect. Here I am using SWI-Prolog. SWI-Prolog allows you to open a file with open(SrcDest, Mode, Stream), where SrcDest will be your file name, Mode is read/write/append/update, and Stream is the "file descriptor" the system will return. The manual clarifies difference between appending and updating as follows: "Mode append opens the file for writing, positioning the file-pointer at the end. Mode update opens the file for writing, positioning the file-pointer at the beginning of the file without truncating the file."
To copy from one stream to another you should use copy_stream_data(Stream1,Stream2).
Finally, you should close the streams, otherwise the output file will be empty.
Putting everything together gives
copy(File1,File2) :- open(File1,read,Stream1), open(File2,write,Stream2), copy_stream_data(Stream1,Stream2), close(Stream1), close(Stream2).
If you need to rewrite the second file, just use update/append mode. | unknown | |
d14065 | val | User is a reserved word and must be bracketed:
"select * from [User]" | unknown | |
d14066 | val | <FormControl variant="outlined" className={classes.formControl}>
<InputLabel id="uni">UNI</InputLabel>
<Select
key={value}
defaultValue={value}
labelId="uni"
id="uni"
name="uni"
onBlur={onChange}
label="uni"
>
{unis.map((u, i) => (
<MenuItem value={u.value} key={i}>
{u.label}
</MenuItem>
))}
</Select>
</FormControl>;
A: Just use the defaultValue attribute of select.
You can refer to the Material UI Select API Docs to know more.
import React from 'react';
import {useState} from 'react';
import FormControl from '@material-ui/core/FormControl';
import InputLabel from '@material-ui/core/InputLabel';
import Select from '@material-ui/core/Select';
import MenuItem from '@material-ui/core/MenuItem';
const Selector = () => {
const [Value, setValue] = useState("1"); // "1" is the default value in this scenario. Replace it with the default value that suits your needs.
const handleValueChange = event => {
setValue(event.target.value);
}
return(
<FormControl>
<InputLabel id="Input label">Select</InputLabel>
<Select
labelId= "Input label"
id= "Select"
value= {Value}
defaultValue= {Value}
onChange= {handleValueChange}
>
<MenuItem value="1">Item1</MenuItem>
<MenuItem value="2">Item2</MenuItem>
<MenuItem value="3">Item3</MenuItem>
</Select>
</FormControl>
)
};
export default Selector;
A: If you take a look at the Select Api of Material UI here, you could do it easily.
*
*As explained above, you need to pass the default value in your state variable:
const [age, setAge] = React.useState(10);// <--------------(Like this).
*Set displayEmpty to true:
If true, a value is displayed even if no items are selected.
In order to display a meaningful value, a function should be passed to the renderValue prop which returns the value to be displayed when no items are selected. You can only use it when the native prop is false (default).
<Select
displayEmpty
/>
A: You need to provide correct MenuItem value in state to be matched on render.
Here is the working codesandbox: Default Select Value Material-UI
A: You can just pass the displayEmpty into select
<Select
id="demo-simple-select-outlined"
displayEmpty
value={select}
onChange={handleChange}
>
and define the menuItem like
<MenuItem value=""><Put any default Value which you want to show></MenuItem>
A: Since React introduced Hooks, you just need to pass your default value to React.useState(), e.g. React.useState(10).
export default function CustomizedSelects() {
const classes = useStyles();
const [age, setAge] = React.useState(10);// <--------------(Like this).
const handleChange = event => {
setAge(event.target.value);
};
return (
<form className={classes.root} autoComplete="off">
<FormControl className={classes.margin}>
<Select
value={age}
className={classes.inner}
onChange={handleChange}
input={<BootstrapInput name="currency" id="currency-customized-select" />}
>
<MenuItem value={10}>Ten</MenuItem>
<MenuItem value={20}>Twenty</MenuItem>
<MenuItem value={30}>Thirty</MenuItem>
</Select>
</FormControl>
</form>
);
}
A: I had a similar issue. In my case, I applied a function directly to onChange so I had something like this:
export default function CustomizedSelects() {
const classes = useStyles();
const [age, setAge] = React.useState(10);
return (
<form className={classes.root} autoComplete="off">
<FormControl className={classes.margin}>
<Select
value={age}
className={classes.inner}
onChange={(event) => setAge(event.target.value)}
input={<BootstrapInput name="currency" id="currency-customized-select" />}
>
<MenuItem value={10}>Ten</MenuItem>
<MenuItem value={20}>Twenty</MenuItem>
<MenuItem value={30}>Thirty</MenuItem>
</Select>
</FormControl>
</form>
);
}
I also had a separate button to clear the select value (or select the default empty value). Everything was working and the select value was set correctly, except the Select component did not animate to its default form (the one shown when no value is selected). I fixed the problem by moving onChange to a separate handleChange function, as in the code example @B4BIPIN presented.
I am not an expert in React and still learning and this was a good lesson. I hope this helps ;-)
A: Take a list of objects you want to display in the Select dropdown and initialise it using useState. Use the state now to show the value and update the state on the change of the dropdown.
const ackList = [
{
key: 0,
value: "Not acknowledged",
},
{
key: 1,
value: "Acknowledged",
},
];
function AcknowledgementList() {
//state to initialise the first from the list
const [acknowledge, setAcknowledge] = useState(ackList[1]);
//update the state's value on change
const handleChange2 = (event) => {
setAcknowledge(ackList[event.target.value]);
};
return (
<TextField
select
fullWidth
value={acknowledge.key}
onChange={handleChange2}
variant="outlined"
>
{ackList.map((ack) => (
<MenuItem key={ack.key} value={ack.key}>
{ack.value}
</MenuItem>
))}
</TextField>
);
}
A: The problem here is all to do with some pretty poor coding on the part of the MUI folks, where on several components, they have magic strings and really are doing silly things.
Let's take a look at the state here:
const [age, setAge] = React.useState('3');
You can see that we are having to specify the VALUE as a string. Indeed the data type that the Select control takes is a string | undefined. So the fact we are having to use a number value as a string is the source of confusion.
So how does that work?
It is all to do with the MenuItem component. Let's take a look:
<MenuItem value={1}>First Choice</MenuItem>
<MenuItem value={2}>Second Choice</MenuItem>
<MenuItem value={3}>Third Choice</MenuItem>
You can see that we are indeed having to specify the VALUE of the MenuItem as a number.
So in this case, specifying '3' as the State value, as a string, will select the Third Choice on load.
You can set the VALUE in the Select control as the state value.
Don't forget, when handling the onChange event, that you will need to convert the event.target.value to string. | unknown | |
d14067 | val | Put the Grid inside of a Viewbox and change the size of the Viewbox instead of the Grid.
<Viewbox>
<Grid Clip="M10,10 L10,150 L150,150 L150,10 Z" Width="200" Height="200">
<Rectangle Fill="Red"/>
</Grid>
</Viewbox>
A: An alternative approach to this is to define the clipping path using element rather than attribute syntax, and then use the same transformation on the clip as you apply to the element as a whole, e.g.:
<Grid.Clip>
<PathGeometry FillRule="Nonzero" Transform="{Binding Path=MatrixTransform, RelativeSource={RelativeSource TemplatedParent}, Mode=OneWay}">
<PathFigure StartPoint="715, 96.3333" IsClosed="True" IsFilled="True">
<PolyLineSegment IsStroked="False">
<PolyLineSegment.Points>
<Point X="1255.2526" Y="540" />
<Point X="426.3333" Y="1342.3333" />
<Point X="64.66666" Y="7356.6666" />
</PolyLineSegment.Points>
</PolyLineSegment>
</PathFigure>
</PathGeometry>
</Grid.Clip> | unknown | |
d14068 | val | use this to fix the problem.
Pattern p = Pattern.compile("\\bthis\\b");
Matcher m = p.matcher("Print this");
m.find();
System.out.println(m.group());
Output:
this | unknown | |
d14069 | val | switch (a) will compare the character code of a. If you typed digits, it should be:
case '0':
num_0++;break;
case '1':
num_1++;break;
...
Switch on character values, not integers (the int value of '0' is not 0; in ASCII, for example, it is 48, but let's not use the value directly, so it's fully portable)
Maybe a better thing to do would be to create a table instead:
int count[10] = {0};
....
a -= '0'; // removes the offset
if ((a >= 0) && (a < 10)) // check bounds
{
count[a]++;
}
A: Your answer is in man getchar (emphasis mine)
fgetc() reads the next character from stream and returns it as an unsigned char cast to an int, or EOF on end of file or error.
getc() is equivalent to fgetc() except that it may be implemented as a macro which evaluates stream more than once.
getchar() is equivalent to getc(stdin).
After that, the numerical representation of a character need not be the same as the character value (and mostly, is not), i.e., a character 0 ('0') does not have the numerical value of 0, in ASCII encoding, it has a decimal value of 48.
So, to count a character '0', you should set the case values to '0', not 0.
A: a = getchar()
When you are reading characters, the value stored in the variable is the character (ASCII) code. When you type 1, the value stored in a is 49.
switch (a)
{
case 1:
num_0++;
break;
}
In this code you are comparing the ASCII value 49 (the code for '1') with the integer 1, so the comparison returns false.
Similarly, all the cases compare false and control falls through to the default case.
A: There are some mistakes in your code:
1) You are taking input through the getchar() function, so it reads a character from the input.
2) Declare the input variable 'a' as char, not int.
3) Put single quotes around the case labels, like case '1', because you are reading the input as a character.
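For comparison, the character-code point the answers above make is easy to verify in a higher-level language. This Python sketch shows that '0' is not the number 0, and counts digit characters the same way the proposed C count[] table does:

```python
from collections import Counter

# The character '0' has code point 48 (in ASCII/Unicode), not 0
assert ord('0') == 48

# Count each digit character in the input, like the C count[] table
counts = Counter(ch for ch in "a1b22c333" if ch.isdigit())
```

The same off-by-48 confusion in C disappears once you compare against character literals ('0'..'9') or subtract '0' first.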
d14070 | val | I'm not sure what your intent is with the GlobalEnv, but this might be of help:
swapped = data.frame(t(xts))
ordered = swapped[with(swapped, order(Historical.VaR..95..)),]
result = subset(ordered, select=Historical.VaR..95..) | unknown | |
d14071 | val | I have opened a bug report with Apple now; will see what their answer is... | unknown | |
d14072 | val | You will need to create a service to keep track of the anwers, yes you are correct when the route changes answers array will be overwritten.
calculonApp.service('AnswerService', function() {
var answers = [];
this.addAnswers = function(questionId, a) {
answers.push({
'question':questionId,
'answer':a
});
}
return this;
});
calculonControllers.controller('testYourself', ['$scope', '$routeParams', 'AnswerService',
function($scope, $routeParams, AnswerService) {
$scope.quiz = [
{name:"a", answer: [{0: '1.', 1: '2'}], weight:25},
{name:"b", answer: [{0: '1', 1: '2'}], weight:25}
];
$scope.question = $scope.quiz[$routeParams.questionId];
$scope.questionId = parseInt($routeParams.questionId);
AnswerService.addAnswers($scope.questionId, a);
}]); | unknown | |
d14073 | val | I have still to test it, but the copytruncate option of logrotate should do. | unknown | |
d14074 | val | You can do everything you want with altering CSS class :
To hide event title:
.fc-event-time {
display: none;
}
If you want to keep time of events but keep the same background between title and body, you should unset opacity:
.fc-event-vert .fc-event-bg {
opacity: 0;
}
A: Somewhere around line 3665 inside the fullCalendar (normal, non-minified) version, look for this code
(!event.allDay && seg.isStart ?
"<span class='fc-event-time'>" +
htmlEscape(formatDates(event.start, event.end, opt('timeFormat'))) +
"</span>"
:'') +
"<span class='fc-event-title'>" + htmlEscape(event.title) + "</span>" +
Remove
"<span class='fc-event-title'>" + htmlEscape(event.title) + "</span>" +
So that it still makes sense in JavaScript syntax, and voila! No more title, just the time.
A: With this, install the module Auto Entity Labels or Auto Nodetitles and when you go and edit the Content Type created, you'll see tabs such as "Edit", "Manage Fields", "Auto Label", etc...
When you click on "Auto Label', set the Automatic Label Generation to "Automatically generate the label and hide the label field" and set the Pattern for the title to <none>.
Now when you create content from the content type, i.e. FullCalendar, it will just show the timeslot you've entered without the title of the node. | unknown | |
d14075 | val | Maybe that's because in your catch you are stating that valid is true when it should be false to repeat the block. | unknown | |
d14076 | val | Internet Explorer is surely using the MSXML library. Set the TXmlDocument.DomVendor property to MSXML_DOM (found in the msxmldom unit), and you should get the same behavior. You can also change the DefaultDOMVendor global variable to SMSXML to make all new TXmlDocument objects use that vendor.
A: Have you already tried OmniXML? I've been using it for years and it always solved my problems regarding XML files. If you haven't, I'd advice you to give it a try: it's simple to use, light and free.
A: Internet Explorer use XmlResolver, The XmlResolver property of the XmlDocument is used by the XmlDocument class to locate resources that are not inline in the XML data, such as external document type definitions (DTDs), entities, and schemas. These items can be located on a network or on a local drive, and are identifiable by a Uniform Resource Identifier (URI). This allows the XmlDocument to resolve EntityReference nodes that are present in the document and validate the document according to the external DTD or schema.
you should use a delphi library that implements a resolver and parser to external resources.
Open XML implements a resolver using TStandardResourceResolver
Bye.
A: The following solved the problem for me. It seems that Delphi default parser (MSXML) actually includes external entity references but in a somehow strange way. For this example
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE module [
<!ENTITY Schema65 SYSTEM "schemas/65.xml">
]>
<module>
<schema>&Schema65;</schema>
</module>
I assumed that creating an TXMLDocument and that the external file contains a simple text I could get the contents of the file like this:
MyXML := TXMLDocument.Create('myfile.xml');
ExternalText := MyXML.documentElement.ChildNodes['schema'].Text;
This actually works if the entity reference is replaced with simple text. However, when using the external entity, Delphi will create a new child of type "ntEntityRef" inside the "schema" node. This node will also have a child which finally contains the simple text I expected. The text can be accessed like this:
MyXML.documentElement.ChildNodes['schema'].FirstChild.FirstChild.Text;
In case the external entity file contains a node structure, the corresponding nodes will be created inside the entity reference node. Make sure TXMLDocument.ParseOptions are set to at least to [poResolveExternals] for that to happen. This approach also makes it relatively easy to adapt the code generated by the XML Data Binding Wizard to work with external entities. | unknown | |
d14077 | val | I would rather fix the design issue permanently than waste time on a workaround.
Firstly, NEVER store DATE as VARCHAR2. All this overhead is due to the fact that your design is flawed.
'20100231'
How on earth could that be a valid date? Which calendar has a 31 days in FEBRUARY?
Follow these steps:
*
*Add a new column with DATE DATA TYPE.
*Update the new column with date values from the old column using TO_DATE.
*Do the required DATE arithmetic on the new DATE column, or handle this in the UPDATE statement in step 2 itself.
*Drop the old column.
*Rename the new column to the old column.
UPDATE Adding a demo
Setup
SQL> CREATE TABLE t
2 (ymd varchar2(8));
Table created.
SQL>
SQL> INSERT ALL
2 INTO t (ymd)
3 VALUES ('20101112')
4 --INTO t (ymd)
5 -- VALUES ('20100231')
6 INTO t (ymd)
7 VALUES ('20150101')
8 INTO t (ymd)
9 VALUES ('20160101')
10 SELECT * FROM dual;
3 rows created.
SQL>
SQL> COMMIT;
Commit complete.
SQL>
Add new column:
SQL> ALTER TABLE t ADD (dt DATE);
Table altered.
SQL>
DO the required update
SQL> UPDATE t
2 SET dt =
3 CASE
4 WHEN to_date(ymd, 'YYYYMMDD') > SYSDATE
5 THEN NULL
6 ELSE to_date(ymd, 'YYYYMMDD')
7 END;
3 rows updated.
SQL>
SQL> COMMIT;
Commit complete.
SQL>
Let's check:
SQL> SELECT * FROM t;
YMD DT
-------- ---------
20101112 12-NOV-10
20150101 01-JAN-15
20160101
SQL>
Drop the old column:
SQL> ALTER TABLE t DROP COLUMN ymd;
Table altered.
SQL>
Rename the new column to old column name
SQL> ALTER TABLE t RENAME COLUMN dt TO ymd;
Table altered.
SQL>
You have just fixed the issue
SQL> SELECT * FROM t;
YMD
---------
12-NOV-10
01-JAN-15
SQL> | unknown | |
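As an aside, the "20100231 is not a valid date" point is easy to check outside the database as well. This Python sketch mirrors what TO_DATE with the 'YYYYMMDD' format enforces; parse_ymd is a made-up helper name for illustration:

```python
from datetime import datetime

def parse_ymd(s):
    # Returns a date, or None for impossible values such as '20100231'
    try:
        return datetime.strptime(s, "%Y%m%d").date()
    except ValueError:
        return None
```

Any calendar-aware parser rejects February 31st, which is exactly why the UPDATE above NULLs out unparseable-or-future values rather than copying them blindly.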
d14078 | val | Standard Drupal will only allow you to specify the placement of blocks once. To achieve what you're after you'll need to look into using a contributed module like Context or Panels.
Personally used Context a fair bit in the past. It's pretty powerful but relatively simple to use.
A: This module allows you to create several instances of a block, so you can duplicate a block and put the duplicates on other regions. | unknown | |
d14079 | val | You have an unclosed div at the end of your block (which should be the closing tag), the browser closes it automatically, as well as the parent one. So two last lines:
</div>\n\
<div>'
should be:
</div>\n\
</div>'
A: Ok, I found it:
</div>\n\
<div>' <!-- this one was extraneous -->
}; | unknown | |
d14080 | val | You can check the docs and comments for shared_task on GitHub: https://github.com/celery/celery/blob/9d49d90074445ff2c550585a055aa222151653aa/celery/app/__init__.py
I think for some reason the creation of the Celery app is not being run. In this case it is better to use the explicit app:
from .celery import app
@app.task()
def add(x, y):
return x + y | unknown | |
d14081 | val | I can't answer your question about why your processing is getting delayed, but regarding your question about getting faster input, try using Raw Input instead. That will allow the keyboard to send its own keystroke events directly to you so you do not have to wait for the OS to receive, interpret, and dispatch the keystrokes to you, which takes more time.
A: The straight answer is yes, delays of the order of 50 ms or so are common in processing key presses through the normal Windows message queue. I believe there are multiple sources of these delays.
First, the keyboard itself has a serial interface. Historically this was very slow, and probably tied to the underlying 55ms clock. A USB keyboard is likely to be much faster but the point is that each individual key press or release will be sent and processed individually, even if they appear to absolutely simultaneous.
The Windows code paths leading to the processing of Windows messages are long and the flow of intervening messages can be high. If there are many messages to process your simultaneous key releases may become separated by many other messages. You can reduce this by peeking messages instead of waiting for them, to a point.
So you really will have to use Raw Input, you're going to need to handle events quickly and you still need to anticipate delays. My guess is you're going to need to 'debounce' your input to the tune of at least 20ms to get smooth behaviour. There is lots of reading out there to help you on your way. | unknown | |
d14082 | val | Yes, in settings, tap ssl verification off
File > Settings > General > SSL Certificate Verification > off | unknown | |
d14083 | val | As far as i know ProFTPD does not contain its own users, but rather uses external resources to authenticate. That means that if you want to edit a user (or it's password) you need to edit whatever source ProFTPD authenticated that user against (i.e. /etc/passwd, PAM, LDAP, etc).
This, unfortunately for you, means that you can not edit your password from within an FTP session, but rather have to access the server via SSH or similar to change it.
More info can be found in the documentation: http://www.proftpd.org/docs/howto/Authentication.html | unknown | |
d14084 | val | Are you sure you are getting data ? Your substr() must be returning empty strings.
You are adding a slash to your day and month and putting them back together in the wrong order. Just run your code with a fixed string:
$dob = 'dd/mm/yyyy';
$dd = substr($dob,0,2)."/";
$mm = substr($dob,3,2)."/";
$yyyy = substr($dob,6,4);
$fd = $yyyy.$mm.$dd;
var_dump($fd);
Result:
string(10) "yyyymm/dd/"
To me, $dob is clearly empty, as all three variables resulting from the substr() are as well, except for the slashes you add back, which is what you get on the error. Run the code again with an empty variable and you'll get: string(2) "//".
Once you fix your $dob issue, you can use the DateTime class directly as suggested by Chayan:
$date = DateTime::createFromFormat('d/m/Y', $dob);
echo $date->format('Y-m-d');
A: By default PHP can't parse date having '/' in it. Use DateTime::createFromFormat function.
$date = DateTime::createFromFormat('Y/m/d', $fd);
echo $date->format('Y-m-d'); | unknown | |
d14085 | val | Maybe something like a runnable:
private Handler handler = new Handler();
handler.postAtTime(timeTask, SystemClock.uptimeMillis() + 500);
private Runnable timeTask = new Runnable() {
public void run() {
//do stuff here
//do it again soon
handler.postAtTime(timeTask, SystemClock.uptimeMillis() + 500);
}
};
Before you leave make sure to stop the tasks:
handler.removeCallbacksAndMessages(null);
A: use a Handler to update it ,
Handler handler=new Handler();
int FREQ=5000; // the update frequency
handler.postDelayed(new Runnable()
{
public void run()
{
try
{
String currStr; // get your next song name
currPlayView.setText(currStr);
}
finally
{
handler.postDelayed(this, FREQ);
}
}
}, FREQ); | unknown | |
d14086 | val | I'm afraid at this very moment Google Wallet only notifies the user when the subscription is cancelled.
I asked the same myself on google wallet for digital goods forum :
https://groups.google.com/forum/?fromgroups=#!topic/in-app-payments/YFaCBDwaF9g
See the 2nd answer from Mihai Ionescu from Google
EDIT: As suggested by Qix below, I'm quoting the answer given by Google below:
*
*After the subscription is setup, you will receive a postback only when the subscription is cancelled:
https://developers.google.com/in-app-payments/docs/subscriptions#6
*Currently the merchant can cancel or refund a subscription from the Merchant Center. We are working on adding API support for
cancellations and refunds.
Please note that the forum entry is from late 2012, however as of May 2014 things doesn't seem to have changed much, as Google still postbacks only for Subscription Cancellations | unknown | |
d14087 | val | It should be enough to have spring-boot-starter-web dependency, this by default includes Tomcat. You might be missing the dependencies when running the application e.g. see that SpringBootServletInitializer is present and running.
Take a look at bazel-springboot-rule project and springboot.bzl
Packager which package Spring Boot application as runnable JAR using Bazel (in similar way it's done by Maven and Gradle). It's more or less:
load("//tools/springboot:springboot.bzl",
"springboot",
"add_boot_web_starter"
)
add_boot_web_starter(app_deps)
springboot(
name = "spring-boot-sample",
boot_app_class = "com.main.Application",
deps = app_deps
) | unknown | |
d14088 | val | Use CSVRecordReader with the label appended to the end of each row as an integer with 0 to 9.
Use convolutionalFlat as the setInputType at the bottom.
Example snippet:
.setInputType(InputType.convolutionalFlat(28,28,1))
.backprop(true).pretrain(false).build();
Whole code example for the neural net config:
https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/convolution/LenetMnistExample.java | unknown | |
d14089 | val | You can use window functions. For each product, you can identify groups of adjacent matching rows by counting the number of non-10s before that row. This identifies the groups.
select name, sum(case when sale = 10 then 1 else 0 end) as cnt
from (select t.*,
sum(case when sale <> 10 then 1 else 0 end) over (partition by product order by date) as grp
from t
) t
group by name,
(case when sale <> 10 then 1 else 0 end),
grp;
This returns the value for all groups. I think you might want the longest, which would be:
select name, max(cnt)
from (select name, sum(case when sale = 10 then 1 else 0 end) as cnt
from (select t.*,
sum(case when sale <> 10 then 1 else 0 end)
over (partition by product
order by date
rows between unbounded preceding and current row
) as grp
from t
) t
group by name,
(case when sale <> 10 then 1 else 0 end),
grp
) t
group by name; | unknown | |
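For a single product, the effect of the second query (the longest streak of sale = 10 rows when ordered by date) can be sketched like this. It is a plain illustration of the grouping idea, not a translation of the SQL:

```python
from itertools import groupby

def longest_run(sales, target=10):
    # Length of the longest run of consecutive values equal to target
    best = 0
    for is_target, grp in groupby(sales, key=lambda s: s == target):
        if is_target:
            best = max(best, sum(1 for _ in grp))
    return best
```

The running count of non-10 rows in the SQL plays the same role as groupby's key changes here: it assigns every maximal run of 10s its own group id.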
d14090 | val | After much work and research, I discovered that the file I was trying to check the Content-Length tag on was chunked encoded which takes that tag away. The Apache server the files are hosted on automatically chunks .txt files that are too large. A workaround to this problem was to simply change the file extension from .txt to a .bin file. Doing that gave me the Content-Length header allowing me to check the size of the file without having to actually download the file. Apparently, the automated chunked encoding is something that is standard in HTTP 1.1. I'm not sure if that's the best way to get around this issue, but it worked in my case. Hope this helps someone else.
A: You're doing this the wrong way. You should be using a HEAD request. That way nothng is sent in reply except the headers, so no chunk encoding, and no Range header is necessary either, | unknown | |
d14091 | val | Question 1: I'm not sure why, but having multiple versions of R on your PATH can lead to unexpected situations like this. /usr/local/bin is usually ahead of /usr/bin in the PATH, so I would've expected R 3.6.3 to be found. Perhaps it has to do with Question 2.
Question 2: Some distros (like CentOS/RHEL) don't put /usr/local/bin on the PATH by default when using sudo. See https://unix.stackexchange.com/questions/8646/why-are-path-variables-different-when-running-via-sudo-and-su for details. The answers there describe several ways to add /usr/local/bin to the PATH when using sudo -- for example, modifying secure_path in /etc/sudoers to include /usr/local/bin like:
Defaults secure_path = /usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
With R 3.6.3 ahead of the default system R in the PATH, you shouldn't have to delete /bin/R or /usr/bin/R. But eventually, I'd recommend installing multiple side-by-side versions of R the same way, using https://docs.rstudio.com/resources/install-r/, so it's easier to manage. The next time you install a new R version, you can just replace the symlinks in /usr/local/bin. The default system R (from EPEL) is meant to be the only R on a system, with in-place upgrades.
If you want to replace the default R 3.5.2 with a side-by-side R 3.5.2 (or 3.5.3), you could install R 3.5 from https://docs.rstudio.com/resources/install-r/, install all necessary packages, and have Shiny Server use the new R 3.5. Then uninstall the R from EPEL (R-core or R-core-devel) to fully switch over. From there, you could even create symlinks to R in /usr/bin instead of /usr/local/bin, and not worry about adding /usr/local/bin to the sudo PATH. | unknown | |
d14092 | val | If you expect the number of duplicate keys to be small, just keep incrementing the iterator until the key value changes. If you expect the number of duplicate keys to be large, just use upper_bound to get an iterator to the element with the next key value. | unknown | |
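The same "equal range" idea can be sketched on a sorted Python list with bisect, where bisect_right plays the role of upper_bound and bisect_left of lower_bound (an analogy, not C++ code):

```python
from bisect import bisect_left, bisect_right

keys = [1, 2, 2, 2, 3, 5]   # sorted keys, like a multimap
lo = bisect_left(keys, 2)   # first position of key 2 (lower_bound)
hi = bisect_right(keys, 2)  # one past the last position (upper_bound)
duplicates = keys[lo:hi]    # the run of equal keys
```

Jumping straight to hi costs O(log n) regardless of how many duplicates there are, while scanning forward from lo is cheaper only when the run is short.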
d14093 | val | I was constructing the JSON from PHP, something like:
$data = $autoQuery->fetch_array();
$autoData = array('CARS' => $data['CARS'],
'MOTORS' => $data['MOTORS'],
'BOATS' => $data['BOATS']);
echo json_encode($autoData);
This wasn't working. When I put intval() before each $data variable, it worked! | unknown | |
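The underlying issue, numeric values fetched from the database as strings and then serialized as JSON strings, shows up the same way in other languages. A Python sketch of the "intval before encoding" fix, with a made-up row dict:

```python
import json

row = {"CARS": "3", "MOTORS": "5", "BOATS": "1"}  # driver returns strings
typed = {k: int(v) for k, v in row.items()}       # the intval() step

as_strings = json.dumps(row, sort_keys=True)
as_numbers = json.dumps(typed, sort_keys=True)
```

Whatever consumes the JSON (a chart library, in many of these cases) usually expects numbers, so the cast has to happen before encoding.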
d14094 | val | As loulou8284 mentioned you can put it in your XML, or if it is fixed, define it with Color.rgb(), but to make your code running you need to get the reference to your Context as your class is not declared inside a context-class:
convertView.setBackgroundColor(getContext().getResources().getColor(R.color.purple));
A: Assuming you have a context instance somewhere in the adapter instead of this
convertView.setBackgroundColor(getResources().getColor(R.color.purple));
it should be this
convertView.setBackgroundColor((your context).getResources().getColor(R.color.purple));
and if you don't have a reference to the context just pass it in to the adapter constructor
A: You can declare the color in you .xml file ( in your item xml file )
A: Use setBackgroundResource() rather than setBackgroundColor()
setBackgroundResource() takes an integer resource index as parameter, and load whatever resource that index points to (for example; a drawable, a string or in your case a color).
setBackgroundColor(), however takes an integer representing a color. That is, not a color-resource, but a direct, hexadecimal, rgba value (0xAARRGGBB). | unknown | |
d14095 | val | For each index i, find the previous index prev[i] that has the same value (or -1 if there is no such index). This can be done in O(n) average by going left to right with a hash map; then the answer for a range [l; r) of indices is the number of indices i in [l; r) such that prev[i] < l (it requires some thinking, but should be clear).
Now we have reduced the problem to "given a range [l; r) and a value c, count the elements less than c" on the array prev. This can be done in O(log^2 n) per query using a segment tree, if we store in each vertex the sorted list of all numbers in its range (subtree). (Each query touches O(log n) vertices, and we do a binary search in each of them.)
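The prev[] construction and the counting claim can be checked with a small sketch. Here distinct_in_range answers queries naively by scanning; the segment tree described above is only needed to make that count fast:

```python
def build_prev(a):
    # prev[i] = previous index holding the same value, or -1 if none
    last, prev = {}, []
    for i, v in enumerate(a):
        prev.append(last.get(v, -1))
        last[v] = i
    return prev

def distinct_in_range(prev, l, r):
    # Number of distinct values in a[l:r): count i with prev[i] < l,
    # i.e. elements whose previous occurrence falls outside the range
    return sum(1 for i in range(l, r) if prev[i] < l)
```

Each distinct value in the range is counted exactly once, at its first occurrence inside [l; r), which is precisely the index whose prev falls before l.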
d14096 | val | What you need to do in your resize-callback is the following:
var $carouselContainer = $('#caroufredsel');
var resizeCallback = function() {
var showThatManyItems = 3; // determine the number of items to be shown depending on viewport size
$carouselContainer.trigger('configuration', [
'items', {
visible : showThatManyItems
}
], true);
}
A: If I'm not mistaken, don't you need "responsive:true," as one of the parameters? | unknown | |
d14097 | val | Adding the 'fixed' class to the cover pages solves the problem.
d14098 | val | How about this
$('#delete').click(function() {
var checked = $('.inbox_check:checked');
var ids = checked.map(function() {
return this.value; // why not store the message id in the value?
}).get().join(",");
if (ids) {
$.post(deleteUrl, {idsToDelete:ids}, function() {
checked.closest(".line").remove();
});
}
else {
alertBox('No messages selected.'); // this is a custom function
}
});
Edit: Just as a side comment, you don't need to be generating those incremental ids. You can eliminate a lot of that string parsing and leverage jQuery instead. First, store the message id in the value of the checkbox. Then, in any click handler for a given line:
var line = $(this).closest(".line"); // the current line
var isSelected = line.has(":checked").length > 0; // true if the checkbox is checked
var msgId = line.find(":checkbox").val(); // the message id
var starImg = line.find(".star_clicker img"); // the star image
A: Assuming each checkbox has a parent div or td:
function removeDatabaseEntry(reference_id)
{
var result = null;
var scriptUrl = './databaseDelete.php';
$.ajax({
url: scriptUrl,
type: 'post',
async: false,
data: {id: reference_id},
success: function(response)
{
result = response;
}
        });
return result;
}
$('.inbox_check').each(function(){
if ($(this).is(':checked')){
var row = $(this).parent().parent();
var id = row.attr('id');
if (id == null)
{
alert('My selector needs updating');
return false;
}
var debug = 'Deleting ' + id + ' now...';
if (console) console.log(debug);
else alert(debug);
row.remove();
var response = removeDatabaseEntry(id);
// Tell the user something happened
$('#response_div').html(response);
}
}); | unknown | |
d14099 | val | Figured it out. GiftedChat requires that you use one of its own methods called append.
const onSend = (msg) => {
console.log("msg : ", msg);
// first, make sure message is an object within an array
const message = [{
_id: msg[0]._id,
text: msg[0].text,
createdAt: new Date(),
user: {
_id: userId,
avatar: "https://randomuser.me/api/portraits/women/79.jpg",
name: "Jane"
}
}]
// then, use the GiftedChat method .append to add it to state
setMessages(previousArr => GiftedChat.append(previousArr, message));
} | unknown | |
d14100 | val | This is Babel's output (targeting only IE 11):
"use strict";
function _createForOfIteratorHelper(o, allowArrayLike) {
var it =
(typeof Symbol !== "undefined" && o[Symbol.iterator]) || o["@@iterator"];
if (!it) {
if (
Array.isArray(o) ||
(it = _unsupportedIterableToArray(o)) ||
(allowArrayLike && o && typeof o.length === "number")
) {
if (it) o = it;
var i = 0;
var F = function F() {};
return {
s: F,
n: function n() {
if (i >= o.length) return { done: true };
return { done: false, value: o[i++] };
},
e: function e(_e) {
throw _e;
},
f: F
};
}
throw new TypeError(
"Invalid attempt to iterate non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method."
);
}
var normalCompletion = true,
didErr = false,
err;
return {
s: function s() {
it = it.call(o);
},
n: function n() {
var step = it.next();
normalCompletion = step.done;
return step;
},
e: function e(_e2) {
didErr = true;
err = _e2;
},
f: function f() {
try {
if (!normalCompletion && it.return != null) it.return();
} finally {
if (didErr) throw err;
}
}
};
}
function _unsupportedIterableToArray(o, minLen) {
if (!o) return;
if (typeof o === "string") return _arrayLikeToArray(o, minLen);
var n = Object.prototype.toString.call(o).slice(8, -1);
if (n === "Object" && o.constructor) n = o.constructor.name;
if (n === "Map" || n === "Set") return Array.from(o);
if (n === "Arguments" || /^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n))
return _arrayLikeToArray(o, minLen);
}
function _arrayLikeToArray(arr, len) {
if (len == null || len > arr.length) len = arr.length;
for (var i = 0, arr2 = new Array(len); i < len; i++) {
arr2[i] = arr[i];
}
return arr2;
}
var customSelect = document.getElementsByClassName("input-select");
Array.from(customSelect).forEach(function (element, index) {
element.addEventListener("click", function () {
Array.from(customSelect).forEach(function (element, index2) {
if (index2 !== index) {
element.classList.remove("open");
}
});
this.classList.add("open");
});
var _iterator = _createForOfIteratorHelper(
document.querySelectorAll(".select-option")
),
_step;
try {
for (_iterator.s(); !(_step = _iterator.n()).done; ) {
var option = _step.value;
option.addEventListener("click", function () {
if (!this.classList.contains("selected")) {
this.parentNode
.querySelector(".select-option.selected")
.classList.remove("selected");
this.classList.add("selected");
this.closest(".input-select").querySelector(
".input-select__trigger span"
).textContent = this.textContent;
}
});
} // click away listener for Select
} catch (err) {
_iterator.e(err);
} finally {
_iterator.f();
}
document.addEventListener("click", function (e) {
var isClickInside = element.contains(e.target);
if (!isClickInside) {
element.classList.remove("open");
}
return;
});
});
A: You can use Babel to transpile the code first. You can also refer to this tutorial about how to use Babel to transpile code.
The code after transpiling with Babel looks like this:
'use strict';
var customSelect = document.getElementsByClassName('input-select');
Array.from(customSelect).forEach(function (element, index) {
element.addEventListener('click', function () {
Array.from(customSelect).forEach(function (element, index2) {
if (index2 !== index) {
element.classList.remove('open');
}
});
this.classList.add('open');
});
var _iteratorNormalCompletion = true;
var _didIteratorError = false;
var _iteratorError = undefined;
try {
for (var _iterator = document.querySelectorAll('.select-option')[Symbol.iterator](), _step; !(_iteratorNormalCompletion = (_step = _iterator.next()).done); _iteratorNormalCompletion = true) {
var option = _step.value;
option.addEventListener('click', function () {
if (!this.classList.contains('selected')) {
this.parentNode.querySelector('.select-option.selected').classList.remove('selected');
this.classList.add('selected');
this.closest('.input-select').querySelector('.input-select__trigger span').textContent = this.textContent;
}
});
}
// click away listener for Select
} catch (err) {
_didIteratorError = true;
_iteratorError = err;
} finally {
try {
if (!_iteratorNormalCompletion && _iterator.return) {
_iterator.return();
}
} finally {
if (_didIteratorError) {
throw _iteratorError;
}
}
}
document.addEventListener('click', function (e) {
var isClickInside = element.contains(e.target);
if (!isClickInside) {
element.classList.remove('open');
}
return;
});
});
Then add this line of code before the script to add a polyfill:
<script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.6.15/browser-polyfill.min.js"></script> | unknown |