_id | partition | text | language | title |
---|---|---|---|---|
d14701 | val | UPDATE!!!
This issue is fixed in pgAdmin 4 Version 4.3!
Thank you pgAdmin Team!
Note: This is still an issue up through pgAdmin 4 version 4.2
Updated: Feb 19, 2019
:(
/*
Issue:
(Tested on Windows Server 2012 R2, Chrome and Firefox, pgAdmin 4 3.2)
Using nested functions in a variable assignment, or just in a SQL statement
causes multiple tabs to be added when hitting enter for a new line anywhere
later in your code.
If you uncomment the first line with nested functions (below), all
carriage returns lower in the code create new lines with
many unwanted tabs.
Uncomment the line below and hit enter at the end of the line,
or before another line of code.
*/
/*
x := upper(substr('uncomment to test this. Hit enter after the semicolon.', 13));
*/
/*
My workaround is to unnest the functions and use multiple statements.
Note: Be sure the offending line above is commented out.
*/
x := substr('uncomment to test this. Hit enter after the semicolon.', 13);
x := upper(x);
A: Have tried your suggestion and it does work. But it does seem odd that we have to comment out the entire offending line (i.e. with the nested text) to make this work. I haven't had this issue with other editors. For example, entering the same text in SQL Developer as follows:
SELECT *
FROM employees
WHERE deptno IN (SELECT deptno FROM departments
WHERE loc = 'CHICAGO');
Pressing enter will place the cursor under the 2nd WHERE (same as Postgres). I clear the tabs with Shift+Tab to column 1, and going forward I am fine. Each new line, cursor is at the beginning. This doesn't work with Postgres.
I am still new to a lot of this. Thank you for sharing. | unknown | |
d14702 | val | room1_chestChoice is defined only if the previous choice, room1_choice1, was "search the room". Checking room1_chestChoice makes sense only in that case. Change your indentation to reflect your decision tree:
elif room1_choice1 == 'Search the room':
    print('You have chosen to search the room')
    ...
    room1_chestChoice = input(':')
    if room1_chestChoice == 'Attempt to open':
        print('You heave at the lid')
        ...
        room1_chestChoice = input(':')
That second if has to be entirely within the code dependent on the elif. | unknown | |
d14703 | val | Looks like the original suggestion above is correct. I'm seeing slider javascript includes at the top of your homepage that aren't on the other pages.
Generally, a good way of troubleshooting is to make copies of both pages, index-c.php and about-c.php perhaps, and start removing everything that isn't pertinent to the trouble you're having (other HTML, css includes, etc.) until you get down to only the slider on the page. Once you've done that, you might notice that the one page is slightly different than the other, making it work. You can copy back and forth until it does.
The other possibility is that there's a relative path problem somewhere, because your one page is inside a folder (though I'm guessing you have a .htaccess redirect to a root folder page)? So if all else fails, move the reduced about-c.php to the root folder and see if that then works. If so, you know it's a path problem.
Hope these suggestions help.
A: I see that jQuery is being included on all your pages but the cycle plugin is only included on the home page. You should be able to update your template(s) to fix this. | unknown | |
d14704 | val | // search is hidden in TableView by default
For Swift:
self.TableView.setContentOffset(CGPointMake(0, self.searchController.searchBar.frame.size.height), animated: false)
For Objective-C:
[[self staffTableView] setContentOffset:CGPointMake(0, frameHeight) animated:YES];
When you pull down, it will reveal your search bar (added in the header view).
Hope this helps you solve your problem. It was working for me.
A: Swift 3:
let point = CGPoint(x: 0, y:(self.navigationController?.navigationBar.frame.size.height)!)
self.tableView.setContentOffset(point, animated: true)
This is for regular navigation bar. For search controller:
Replace navigationController and navigationBar with karthik's y coordinate. | unknown | |
d14705 | val | I should have guessed that every time you said
regular ajax syntax from the controller
you meant you were using jQuery $.ajax previously.
The difference is that the property that sets the request method in Angular's $http is method, not type.
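Written out in the long form, that would look something like this (a sketch using the same URL, token, and payload variables as the snippet below; note method where jQuery's $.ajax uses type):
return $http({
  method: 'DELETE',
  url: baseURI + 'services/v6.0/agent-sessions/' + sessionId,
  headers: {
    Authorization: 'Bearer ' + access_token
  },
  data: endAgentSessionPayload
});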
You can simplify it further using
return $http.delete(baseURI + 'services/v6.0/agent-sessions/' + sessionId, {
headers: {
Authorization: 'Bearer ' + access_token,
},
data: endAgentSessionPayload
}); | unknown | |
d14706 | val | I suggest using one of the scientific python distributions, see scipy.org, Install section. I use Anaconda.
If you use the IPython notebook installed from one of the scientific python distributions, then you are ready to use it right away.
You get many useful packages, specifically, pandas package.
You do
import pandas as pd
then
data = pd.read_excel(filename)
you get a data frame, with all the data.
You can set the data frame column names by supplying a list of names with the keyword argument 'names' in the above function.
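For example, a minimal sketch (the file name and the column labels here are made up, purely for illustration):
import pandas as pd
# hypothetical file and column names
data = pd.read_excel("measurements.xlsx", names=["time", "voltage", "current"])
print(data.head())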
See here: pd.read_excel | unknown | |
d14707 | val | Indeed, lambdas have unique types, so a template taking the callback would have to instantiate an infinite recursion of itself.
One way to solve that is to use a single known type instead: either a custom functor, or a type-erased type such as std::function
template<class PrimeIter>
void iter_feasable_primes(
const PrimeIter& candidate_primes,
uint32_t larger_prime,
uint8_t index,
std::function<void(std::vector<uint32_t>)> cb)
{
std::vector<uint32_t> next_candidate_primes;
for (const uint32_t p : candidate_primes) {
// Updates next_candidate_primes
}
if (index == 0) {
// No valid tuples of primes were found
return;
}
for (const uint32_t p : next_candidate_primes) {
iter_feasable_primes(next_candidate_primes, p, index - 1, [&](std::vector<uint32_t> smaller_primes) {
smaller_primes.push_back(p);
cb(smaller_primes);
});
}
} | unknown | |
d14708 | val | You can use Ajax.ActionLink:
@Ajax.ActionLink("Delete", "DeleteUser", new { id = user.user_id }, new AjaxOptions { Confirm = "Are You sure to delete?", UpdateTargetId = "article_1" }) | unknown | |
d14709 | val | Use the Webpack rule for pug files with these loaders:
...
{
test: /\.pug$/,
use: [
{
loader: 'html-loader'
},
{
loader: 'pug-html-loader'
}
],
},
...
And maybe you can get rid of !!pug-loader! for the plugin's template property:
...
new HtmlWebpackPlugin({
template: './src/pug/index.pug',
filename: path.join(__dirname, 'dist/index.html')
})
...
Probably you have to install the loaders via npm:
npm i html-loader pug-html-loader | unknown | |
d14710 | val | There is a very simple and fast way, that does not even require you to manually write url rewriting rules.
In the Error Pages module, IIS already defines the same error status to display different pages according to the client's language.
You can see that the option "Try to return the error file in the client language" is checked, and that a file path is entered for the error page.
Error pages for the different languages already exist in separate language folders.
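The same setting can also be expressed in web.config; a rough sketch (the status code and paths are only illustrative):
<system.webServer>
  <httpErrors errorMode="Custom">
    <remove statusCode="404" subStatusCode="-1" />
    <!-- prefixLanguageFilePath makes IIS look for per-language subfolders (en-US, fr-FR, ...) -->
    <error statusCode="404"
           prefixLanguageFilePath="C:\inetpub\custerr"
           path="404.htm"
           responseMode="File" />
  </httpErrors>
</system.webServer>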
So if you want to respond with a custom page, just change these pages, or create new error pages, store them in the different language folders, and change the file name in the file path. | unknown | |
d14711 | val | *
*Is googlePlusUserId identify the person that has a sequence of channels? In other words can we use googlePlusUserId to group channels
that belong to a YouTube user?
No, the googlePlusUserId identifies the Google+ profile of the channel. It can be a Google+ page or a Google+ identity.
Each channel has a different Google+ id.
You can read more about YouTube and Google+ id | unknown | |
d14712 | val | For a test, try a hard-coded translation.activate(language) just before sending it:
def nomegiorno(self):
old = translation.get_language()
print('old', old)
translation.activate('it')
pio = self.datainserimento.strftime("%A")
translation.activate(old)
return pio
And tell me what you have, this should help anyway | unknown | |
d14713 | val | It is also possible to add a code snippet to the documentation, like the following, by adding this code:
* @code
* - (UIButton*) createButtonWithRect:(CGRect)rect
{
// Write your code here
}
* @endcode
For more details of documenting methods of a custom class you can have a look on my blog Function Documentation in Xcode.
A: To get Xcode to show documentation for your classes, you must create a documentation set for your classes using a tool like Doxygen or HeaderDoc. After creating the documentation set, you must install it using Xcode's documentation preferences. Apple has an article on using Doxygen, but it covers Xcode 3, not 4.
Using Doxygen to Create Xcode Documentation Sets
A: As of Xcode 5.0, Doxygen and HeaderDoc formatting for variables and methods is automatically parsed and rendered in the Quick Help popover. More information about it here, but here's some key bits:
/**
* Add a data point to the data source.
* (Removes the oldest data point if the data source contains kMaxDataPoints objects.)
*
* @param aDataPoint An instance of ABCDataPoint.
* @return The oldest data point, if any.
*/
- (ABCDataPoint *)addDataToDataSource:(ABCDataPoint *)aDataPoint;
renders in Xcode as:
As for properties, it's as easy as:
/// Base64-encoded data.
@property (nonatomic, strong) NSData *data;
When option-clicked, this lovely popover appears:
A: Well, it seems that for classes the question still hasn't been answered, so I'll post my suggestions.
Just before the @interface MyClass : NSObject line in the MyClass.h file you use the comment like you did in your example, but adding some tags to display the text. For example the code below:
/**
* @class GenericCell
* @author Average Joe
* @date 25/11/13
*
* @version 1.0
*
* @discussion This class is used for the generic lists
*/
will produce the following output:
the output of the code above http://imageshack.com/a/img18/2194/kdi0.png
A: Appledoc is the best option for generating xcode documentation at the moment. Doxygen is great, but it does not generate docsets that work very well for the popups you're talking about. Appledoc isn't perfect, but we moved over to it and have been really happy with the results. | unknown | |
d14714 | val | This image suggests:
# Use syslog-ng to get Postfix logs (rsyslog uses upstart which does not seem
# to run within Docker).
run apt-get install -q -y syslog-ng
expose 25
cmd ["sh", "-c", "service syslog-ng start ; service postfix start ; tail -F /var/log/mail.log"]
That might be easier in order to produce and see those logs (as output of the main thread)
Also, satnhak suggests in the comments (in 2021):
You probably also need this: syslog-ng/syslog-ng issue 2407
That refers to syslog-ng/syslog-ng PR 2408: "system-source: check if /proc/kmsg can be opened".
Or you would need in your configuration:
source s_local {
system(
exclude-kmsg(yes)
);
internal();
};
satnhak adds:
adding this to the Dockerfile does the trick:
RUN sed -i 's/system()/system(exclude-kmsg(yes))/g' \
/etc/syslog-ng/syslog-ng.conf
A: *
*Install rsyslog
apt-get install rsyslog
*Start rsyslog
service rsyslog start
*Restart postfix
service postfix restart
*You will find /var/log/mail.log | unknown | |
d14715 | val | Maybe try Range("A1").NumberFormat
Or, Range("D2").Value = Val(Range("C2").Value) The Val() function. | unknown | |
d14716 | val | Problem: the code reads '\n' into buffer[], yet then tries to use that as part of the command. Need to trim the buffer. See *** below.
// Insure file is open
fichier=fopen("ethernet_dns.txt","r");
assert(fichier);
// Use fgets
//memset(&buffer,0,sizeof(buffer));
//fread(buffer,20,1,fichier);
if (fgets(buffer, sizeof buffer, fichier)) {
// lop off potential \n
buffer[strcspn(buffer, "\n")] = '\0'; // ***
printf("buffer is: <%s>\n",buffer);
int n = snprintf(command, sizeof(command), "ping -c 1 -W 1 %s > /tmp/ping_result",
buffer);
printf("command is: <%s>\n",command);
// Only issue command if no problems occurred in snprintf()
if (n > 0 && n < sizeof(command)) system(command);
A: the posted code has a couple of problems
1) outputs results of ping to stdout rather than to /tmp/ping_result
2) fails to removed trailing newline from the buffer[] array
The following code
1) cleans up the indenting
2) corrects the problems in the code
3) handles possible failure of call to fopen()
4) eliminates unneeded final statement: return 0
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main( void )
{
FILE *fichier;
char buffer[20];
char command[200];
system(" cat /etc/resolv.conf | grep nameserver | awk -F' ' '{print $2}' | cut -d'.' -f1-3 | awk '{print $1\".1\"}' > ethernet_dns.txt");
fichier=fopen("ethernet_dns.txt","r");
if( !fichier )
{
perror( "fopen for ethernet_dns.txt failed");
exit( EXIT_FAILURE );
}
// implied else, fopen successful
memset(buffer,0,sizeof(buffer));
size_t len = fread(buffer,1, sizeof(buffer),fichier);
printf( "len is: %lu\n", len );
buffer[len-1] = '\0'; // eliminates trailing newline
printf("buffer is: %s\n",buffer);
snprintf(command,sizeof(command),"ping -c 1 -W 1 ");
strcat( command, buffer);
strcat( command, " > /tmp/ping_result");
printf("command is: %s\n",command);
system(command);
}
the resulting output, on my computer, is in file: /tmp/ping_result
PING 127.0.1.1 (127.0.1.1) 56(84) bytes of data.
64 bytes from 127.0.1.1: icmp_seq=1 ttl=64 time=0.046 ms
--- 127.0.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms | unknown | |
d14717 | val | I am writing a compiler that produces JVM code. I need line numbers in the output. I do it this way.
I build up a list of objects similar to this:
public class MyLineNum {
public final short pc;
public final short lineNum;
}
Then I add the line number table:
final ClassFile classFile = ...;
final ConstPool constPool = classFile.getConstPool();
final MethodInfo minfo = new MethodInfo( ... );
final Bytecode code = new Bytecode( constPool );
... code that writes to 'code'
final List<MyLineNum> lineNums = new ArrayList<>();
... code that adds to 'lineNums'
final CodeAttribute codeAttr = code.toCodeAttribute();
if ( !lineNums.isEmpty() ) {
// JVM spec describes method line number table thus:
// u2 line_number_table_length;
// { u2 start_pc;
// u2 line_number;
// } line_number_table[ line_number_table_length ];
final int numLineNums = lineNums.size();
final byte[] lineNumTbl = new byte[ ( numLineNums * 4 ) + 2 ];
// Write line_number_table_length.
int byteIx = 0;
ByteArray.write16bit( numLineNums, lineNumTbl, byteIx );
byteIx += 2;
// Write the individual line number entries.
for ( final MyLineNum ln : lineNums) {
// start_pc
ByteArray.write16bit( ln.pc, lineNumTbl, byteIx );
byteIx += 2;
// line_number
ByteArray.write16bit( ln.lineNum, lineNumTbl, byteIx );
byteIx += 2;
}
// Add the line number table to the CodeAttribute.
@SuppressWarnings("unchecked")
final List<AttributeInfo> codeAttrAttrs = codeAttr.getAttributes();
codeAttrAttrs.removeIf( ( ai ) -> ai.getName().equals( "LineNumberTable" ) ); // remove if already present
codeAttrAttrs.add( new AttributeInfo( constPool, "LineNumberTable", lineNumTbl ) );
}
// Attach the CodeAttribute to the MethodInfo.
minfo.setCodeAttribute( codeAttr );
// Attach the MethodInfo to the ClassFile.
try {
classFile.addMethod( minfo );
}
catch ( final DuplicateMemberException ex ) {
throw new AssertionError( "Caught " + ex, ex );
} | unknown | |
d14718 | val | Once a user is signed out from Firebase's anonymous authentication provider, there is no way to reclaim that UID through that provider. Given that a user doesn't have to provide any credentials to sign-in anonymously, allowing them to claim a specific UID would be a big security risk.
The only option would be to build your own provider for Firebase Authentication and give the user the same UID as before there, after you've verified that they are the same user. | unknown | |
d14719 | val | what error do you get? You find this information in the details or in the Error Log view. Just a guess: Is the update site of Eclipse Juno activated? Have a look at "Window -> Preferences -> Install/Update -> Available Software Sites". There should be an active entry pointing to "http://download.eclipse.org/releases/juno".
Alternatively you could install the UML Lab Standalone RCP (http://www.uml-lab.com/download/).
Best regards
Manuel | unknown | |
d14720 | val | After speaking to a different colleague, they pointed out that the version which works uses Ansible 2.5.5 while I was trying with 2.5.1; the boto Python libraries also need to be at the correct version. | unknown | |
d14721 | val | After a lot of digging, I was able to answer my own question:
SubfieldBase has been deprecated, and will be removed in Django 1.10, which is why I left it out of the implementation above. However, it seems that what it does is still important. Adding the following method replaces the functionality that SubfieldBase would have added.
def contribute_to_class(self, cls, name, **kwargs):
super(EnumField, self).contribute_to_class(cls, name, **kwargs)
setattr(cls, self.name, Creator(self))
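For reference, a rough sketch of such a Creator descriptor (modeled on the one older Django versions shipped; treat it as an approximation rather than the exact implementation):
class Creator(object):
    # Descriptor that runs the field's to_python() whenever the attribute is set.
    def __init__(self, field):
        self.field = field
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj.__dict__[self.field.name]
    def __set__(self, obj, value):
        obj.__dict__[self.field.name] = self.field.to_python(value)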
The Creator descriptor is what calls to_python on attributes. If this didn't happen, querys on models would result in the EnumField fields in the model instances being simply strings, instead of Enum instances like I wanted. | unknown | |
d14722 | val | Your loop ends when getchar() returns EOF, so you never reach the else if at the end.
Example:
#include <stdio.h>
#include <stdbool.h>
int main(void) {
printf("Histogram\n");
size_t len = 0;
bool running = true;
while (running) {
switch (getchar()) {
case EOF:
running = false;
case ' ':
case '\n':
case '\t':
case '\r':
if (len != 0) {
printf("\n");
len = 0;
}
break;
default:
printf("[]");
len++;
}
}
}
A: Move the tests around:
while (true)
{
const int c = getchar();
if (c != ' ' && c != '\n' && c != '\t' && c != '\r' && c != EOF)
{
state = IN;
len++;
}
else if (state == IN)
{
// ...
}
if (c == EOF) break;
} | unknown | |
d14723 | val | Please use Application Gateway V1. I have seen this issue where the server sends Negotiate and NTLM, and with AppGW V2 the auth falls back to NTLM, where it prompts for login for each and every request (CSS file loading).
A: NTLM / Kerberos is not supported on V2 gateways. No idea why.
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#does-application-gateway-v2-support-proxying-requests-with-ntlm-authentication | unknown | |
d14724 | val | Instead of using a custom field unique_id, you should use the field provided for the event: id
eventClick: function(event, element) {
console.log(event);
console.log(event.id);
if(confirm('Voulez-vous supprimer cette dispo?')) {
$('#calendar').fullCalendar('removeEvents', event.id);
}
},
events: [{
id: 1,
title: 'Some Event',
start: new Date(y, m, d + 1, 19, 0),
end: new Date(y, m, d + 1, 22, 30),
allDay: false
}, {
[...]
Here is an example that works. | unknown | |
d14725 | val | After digging into the manual of optimize.curve_fit, I figured out this needs a boundary limit for "c" in func, because this parameter depends on the T of the curve, so it definitely needs a limit. So here we are with the new set of limits and the nice fitting curve, kudos to myself :-)
rangeX = 400
rangeY = 8000
param_bounds = ([-np.inf, 0, 0, -np.inf], [np.inf, np.inf, 0.02, np.inf]) | unknown | |
d14726 | val | Looks like I only needed :
{
text: 'My accordion Button',
className: "btn-sm",
action: function ( e, dt, node, config ) {
$('#accordion-modal').modal('show');
e.preventDefault();
}
} | unknown | |
d14727 | val | @SmokeyPHP is right, you can do this with JS. See this SO question.
function parseDate(input) {
var parts = input.split('-');
// new Date(year, month [, day [, hours[, minutes[, seconds[, ms]]]]])
return new Date(parts[1], parts[0]-1); // Note: months are 0-based
}
> parseDate("05-2012")
Tue May 01 2012 00:00:00 GMT-0600 (MDT)
And you have the compare part correct.
> d1 = parseDate("05-2012")
Tue May 01 2012 00:00:00 GMT-0600 (MDT)
> d2 = parseDate("06-2012")
Fri Jun 01 2012 00:00:00 GMT-0600 (MDT)
> d1 < d2
true
If you do a lot with dates in JS then moment js is worth looking at. Specifically in this case it has a parse method which can take a format string. | unknown | |
d14728 | val | You can use replace here with regex as:
/\((.*?)\)/g
Second argument to replace is a replacer function.
replace(regexp, replacerFunction)
A function to be invoked to create the new substring to be used to
replace the matches to the given regexp or substr.
How arguments are passed to replacer function
let text = 'Hello there (i)Sir(i)';
let italics = text.replace(/\((.*?)\)/g, (_, match) => `<${match}>`);
console.log(italics);
let text2 = 'Hello there (test)Sir(test)';
let italics2 = text2.replace(/\((.*?)\)/g, (_, match) => `<${match}>`);
console.log(italics2);
A: Use the JavaScript "replaceAll" function:
let text = "Hello there (i)Sir(i)";
console.log(text.replaceAll("(i)", "<i>")); | unknown | |
d14729 | val | I think it is exactly as you suggest:
projectionList.add(Projections.property("player.level"), "player.level");
A: First create alias for the JoinColumn table Player in User & then refer it in your projectionList.
Criteria criteria = session.createCriteria(User.class);
criteria.createAlias("player", "p");
projectionList.add(Projections.property("p.level"), "player");
...
criteria.setProjection(projectionList);
criteria.setResultTransformer(Transformers.aliasToBean(User.class));
I hope this helps you. | unknown | |
d14730 | val | In your PHP you would create an array of the values from the DB in your while loop, then output this array to a JavaScript variable using json_encode.
$arr=array();
while($row = $sql->fetch(PDO::FETCH_ASSOC)){
$arr[]=array( $row['lat'], $row['lng']);
}
echo '<script>var markerData='. json_encode( $arr).';</script>';
This will create a JavaScript array in page you would loop over to create markers:
<script>
var markerData=[ [100,40], [99,37]/* etc*/];
</script>
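To create the markers you would then loop over that array. A rough sketch (it assumes a Google Maps map object named map already exists; markerData is the array emitted by json_encode above):
<script>
for (var i = 0; i < markerData.length; i++) {
  new google.maps.Marker({
    position: { lat: markerData[i][0], lng: markerData[i][1] },
    map: map
  });
}
</script>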
You could also retrieve the JSON data using AJAX, depending on your needs | unknown | |
d14731 | val | The first try we have implemented is a bash script. The biggest problem is uninstalling the old version. Therefore we have set up a convention for the names of the feature and it's subfeatures. So we can use the following to find already installed features:
features=$(echo "feature:list" | ssh -p $smx_ssh_port $smx_user@$smx_host | grep -h "<feature-name-convention-regex>.*|.*x.*|" | cut -f1 -d" " | tr '\n' ' ')
This can then be passed to feature:uninstall and can also be used for detecting if features were installed after the call to feature:repo-add -i.
The remaining problem is that we are unable to reference 3rd-party subfeatures because they won't be uninstalled when an updated version needs to be installed and we can't be sure if all of the subfeatures have been successfully installed.
A: For karaf 3 there is no good way to update features.
This is already a little better in Karaf 4. It allows you to update a feature repo, and you can then simply install the feature again. It will detect that the feature has changed and make the necessary changes to the bundles. | unknown | |
d14732 | val | Consider using Helicon Ape mod_headers module. But note that it works only for IIS7 and higher.
A: Are you saying you don't have access to administer the server? If you can access the IIS management console, you can set these headers directly through the UI:
http://www.webpaths.com/archives/software/microsoft/iis/2009/02/05/how-to-add-an-expires-header-in-iis.html | unknown | |
d14733 | val | Try adding marginBottom to safeAreaView and View :
<SafeAreaView style={{backgroundColor: '#f3f3f5' ,marginBottom:100}}>
<View
style={{
borderWidth: 3,
borderColor: 'yellow',
width: width,
height: height,
flexDirection: 'column-reverse',
marginBottom:100
}}>
Try adjusting your marginBottom to check.
Hope it helps. Feel free to ask if you have doubts. | unknown | |
d14734 | val | Found it!!!
As it is used on Vertical Barchart,
https://plotly.com/javascript/bar-charts/
It can be done on horizontal, also.
What i've done is:
name.replace(" ", "<br>")
HorizontalBar Better | unknown | |
d14735 | val | There appears to be an API called services.list. This API returns the services as a JSON array of service objects. Each object instance includes a field called serviceId, which appears to be the identifier of the service (e.g. DA34-426B-A397), and I believe that this is what you are looking for. | unknown | |
d14736 | val | It's unclear what exactly the sorting should be, but you should be aware of several layers of ordering that you can implement with ThenBy, as such:
string[] data = new string[] {"01-001-A-02", "01-001-A-01", "01-001-B-01", "01-002-A-01", "01-003-A-01"};
var sorted = data.OrderBy(x => x).ThenBy(x=> x.Split('-')[3]);
A: You can order by each part of string separately (splitted by -) using string.Split method:
string[] strArr = { "01-001-A-02", "01-001-A-01", "01-001-B-01", "01-002-A-01", "01-003-A-01", };
strArr = strArr
.Select(s => new { Str = s, Splitted = s.Split('-') })
.OrderBy(i => i.Splitted[0])
.ThenBy(i => i.Splitted[1])
.ThenBy(i => i.Splitted[2])
.ThenBy(i => i.Splitted[3])
.Select(i => i.Str).ToArray();
Note that this requires each element to have four parts (separated by -). Otherwise, it will throw an exception. | unknown | |
d14737 | val | You have not set
worker.DoWork += new DoWorkEventHandler(worker_DoWork);
before calling worker.RunWorkerAsync()
A: You are never wiring up the event.
public Form1()
{
InitializeComponent();
worker.DoWork += new DoWorkEventHandler(worker_DoWork);
}
A: RunWorkerAsync() starts the worker thread and immediately returns thus the debugger seems to "step through it". Set a breakpoint in the worker_DoWork() method.
A: If you still have the event wired up and it is not working, try the following
Just call
System.Windows.Forms.Application.DoEvents();
before calling RunWorkerAsync() | unknown | |
d14738 | val | In the meantime new releases with a lot of bug fixes have been issued, so your problem should be no longer existent.
In addition we added a configuration manager 'light' into the IoT broker, which enables you to use it standalone.
P.S. I became aware of this help request only recently. I am very sorry that we therefore could not provide you an answer in due time. | unknown | |
d14739 | val | Note that you have a circular dependency between App and App2. TypeScript is unable to infer the return type of App2#render, as it uses App in its return expression, which in turn uses App2, which isn't yet fully defined ...
Long story short - declare your render methods as follows:
public render(): JSX.Element {
// ...
}
Thanks to this Typescript compiler knows render signature without looking at function contents. | unknown | |
d14740 | val | CREATE TABLE sport_club_members (
id INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(100),
address VARCHAR(100),
payment_due_at TIMESTAMP DEFAULT NOW()
);
This will build you a table.
A: A very basic approach would use two tables.
One table for your members, in pseudo code (think it through yourself ;-) rather than copy & paste):
ID INT auto_increment PrimaryKey
name VARCHAR(100) //maybe store as split name
address TEXT
and another for the received payments
ID INT auto_increment PK
UID INT ForeignKey to members table
payment_date DATE
amount DOUBLE
Then extract the missing payments for a month via any language/program.
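For example, a sketch of such a query (the table and column names follow the pseudo code above and are only illustrative, as is the hard-coded month):
SELECT m.ID, m.name
FROM members m
LEFT JOIN payments p
  ON p.UID = m.ID
 AND p.payment_date >= '2024-03-01'
 AND p.payment_date <  '2024-04-01'
WHERE p.ID IS NULL;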
This can be expanded with more detail anytime
A: You should just use phpmyadmin to create your tables. You will need 2 tables:
1. Table club_members with the following fields (member_id, name, address).
2. Table monthly_payments with the following fields (member_id, year, month, date_paid, amount).
Then member_id in table monthly_payments will be a foreign key referencing member_id in table club_members | unknown | |
d14741 | val | It's a little bit hidden behind a menu (see :help NERDTreeMenu), but as an upside it is extensible. It is launched (for the current file node) with the m key by default.
The script comes with two default menu plugins: exec_menuitem.vim and
fs_menu.vim. fs_menu.vim adds some basic filesystem operations to the menu for
creating/deleting/moving/copying files and dirs. exec_menuitem.vim provides a
menu item to execute executable files. | unknown | |
d14742 | val | As an alternative, you can wrap your individual methods in separate Predicates and then build a combined predicate with the required operations. This final Predicate hides all the if...else logic.
private final Predicate<Void> predicate;
{
Predicate<Void> doAction = in -> doAction();
Predicate<Void> checkNumber = in -> checkNumber();
Predicate<Void> canMove = in -> canMove();
Predicate<Void> avoidAttackersWhileHeal = in -> avoidAttackersWhileHeal();
Predicate<Void> doCast = in -> doCast();
Predicate<Void> doAttack = in -> doAttack();
predicate = doAction.or(checkNumber).or(canMove).or(avoidAttackersWhileHeal).or(doCast).or(doAttack);
}
public void doIt() {
predicate.test(null);
} | unknown | |
d14743 | val | You're right, same question as AS400 SQL query with Parameter, which contains the solution.
A: Just a note: Host Integration Server 2006 supports named parameters. | unknown | |
d14744 | val | Locale ee has a minimum grouping digits set to 3 as seen on the CLDR survey tool.
You only get the grouping separator if there are at least 3 digits on the left side of the first grouping separator. This is a rare thing, and ee is the only locale as of CLDR 38 with such a value. This applies to both Chrome and Firefox.
I have something to solve this, taking advantage of formatting to parts.
The grouping separator is retrieved by formatting a million, which puts 4 digits on the left side of the first grouping separator (4 is the highest value I can find for the minimum grouping digits); then the non-grouped integer is grouped using that symbol every 3 digits.
function format_no_minimum_grouping_digits(value_to_format, locale, options) {
// Create a number formatter
const formatter = new Intl.NumberFormat(locale, options);
const formatter_options = formatter.resolvedOptions();
// Check if grouping is disabled
if (!formatter_options.useGrouping
// The POSIX locale have grouping disabled
|| formatter_options.locale === "en-US-u-va-posix"
// Bulgarian currency have grouping disabled
|| (new Intl.Locale(formatter_options.locale).language === "bg") && formatter_options.style === "currency") {
// If yes format as normal
return formatter.format(value_to_format);
};
// Otherwise format it to parts
const parts = formatter.formatToParts(value_to_format);
// Check if the grouping separator isn't applied
const groupSym = parts.find(part => part.type === "group") === undefined ? new Intl.NumberFormat(locale, options).formatToParts(10 ** 6)[1].value : undefined;
// If the grouping separator isn't applied, group them
return parts.map(({type, value}) => (type === "integer" && groupSym) ? value.replace(/\B(?=(\d{3})+$)/g, groupSym) : value).join('');
} | unknown | |
d14745 | val | I had the same issue when I stashed a new Model and unfortunately the selected solution did not work for me. What worked for me was: find the model file in your project folder in Finder ("ModelNameHere.xcdatamodeld"); right-button click and select "Show Package Contents". You will see all versions of the Model - delete the one that was not supposed to exist.
A: Check Compile Sources under Build Phases for your Target setting described in the below image.
I went through all the resources carefully and found one resource without any path (an unknown resource; I don't know how it appeared there). Remove it from there, clean the product and run.
The above was the only reason in my case..
Hope it helps you !!! | unknown | |
d14746 | val | glxinfo is your friend. It's a command line tool which will report the version numbers and extensions supported for server side GLX, client side GLX, and OpenGL itself.
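For a quick check from a terminal, something like this (a typical invocation; the exact output lines vary by driver):
glxinfo | grep -E "OpenGL (version|renderer)"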
Do you have the NVIDIA binary (proprietary) driver installed? You'll need it if you want to take advantage of OpenGL versions 3 or 4. Like every software product there are occasional glitches, but over the years I think most 3D programmers / users would agree that the NVIDIA drivers for Linux have been very solid, much better than the alternatives. | unknown | |
d14747 | val | EF can't parse Convert.ToDateTime to SQL. Instead of that, you can declare DateTime variable outside of the query.
DateTime dt = Convert.ToDateTime(tempDate);
var list = db.Table().Where(n => n.GameDate == dt).ToList();
Also, you may need to compare only the Date() part of the DateTime. Then you need to use one of the canonical functions like EntityFunctions.TruncateTime():
DateTime dt = Convert.ToDateTime(tempDate);
var list = db.Table().Where(n => EntityFunctions.TruncateTime(n.GameDate) == dt.Date).ToList(); | unknown | |
d14748 | val | JetBrains Support confirmed to be that this is a bug within CLion that they managed to reproduce on their system.
Hoping for it to be fixed in a future version :) | unknown | |
d14749 | val | Why not just copy the word file to the local drive, then close the share and then open the file.
Though I'd suggest that it might be worth looking at some other kind of access control. This seems fairly unsafe (for example, what if your process crashes after the first command but before the second, then the share would be left open for anyone to use).
A: This seems like a really bad way to do it. There are a number of problems with it:
*
*As you discovered, Word keeps the file open for as long as you're using the file, so you can't just pull the share out from under it.
*You cannot reliably know when Word has finished using the file (for example, you can't just wait for the process to exit, because it might just be a wrapper that starts up for a second, notifies the "original" Word process to open the file and exits)
*Even if you could reliably detect when word had finished using the file, the fact that you had the share open the whole time seems to contradict your requirement to lock it down in the first place
*I assume you don't have the password hard-coded like that, but if you do then it's trivial for someone to just open up your executable and find out the password.
Perhaps if you tell us why you're trying to do this, we can suggest a better way, but if all you want to do is provide read-only access to the file (which seems to be the case) then you can just grant read-only access on the share itself: no need for this complicated process at all! | unknown | |
d14750 | val | You can use a query like so:
SELECT subjectcode.Year1, subjectcode.Year2,
subjectcode.Subjectcode, subjectcode.Subjectname,
subjectcode.Theory_Practical, q.fee
FROM subjectcode
INNER JOIN (
SELECT fees.Year1, fees.Year2, "Theory" As FeeType,
fees.Theoryfee As Fee
FROM fees
UNION ALL
SELECT fees.Year1, fees.Year2, "Practical" As FeeType,
fees.Practicalfee As Fee
FROM fees) AS q
ON (subjectcode.Theory_Practical = q.FeeType)
AND (subjectcode.Year2 = q.Year2)
AND (subjectcode.Year1 = q.Year1)
However, you would be much better off redesigning your fees table to match the data returned by the inner sql, that is, a different line for theory and practical fees:
Year1 Year2 FeeType Fee
2001 2003 Theory 440
2001 2003 Practical 320 | unknown | |
d14751 | val | (I think naomik's is the better approach, but if you are trying to just figure out what's happening, see the following:)
Apparently Image does not allow extension (at least in Firefox where I am testing) because it does work with another class.
If, after this line...
Image.prototype = Object.create(Property.prototype)
...you add:
alert(Image.prototype.display)
...you will see that it is undefined, whereas if you add:
function MyImage () {}
MyImage.prototype = Object.create(Property.prototype)
alert(MyImage.prototype.display); // Alerts the "display" function
I guess that is because you cannot replace the whole prototype. Try adding to the prototype instead (though this won't help with the constructor). | unknown | |
d14752 | val | For some reason, changing from <%= yeoman.dist %> to <%= config.dist %> solves the problem for me. Not sure when using the yeoman.dist syntax is appropriate, but in any case I solved my own problem. So the solution is...
clean: {
dist: {
files: [{
dot: true,
src: [
'.tmp',
'<%= config.dist %>/*',
'!<%= config.dist %>/.git*',
'!<%= config.dist %>/Procfile',
'!<%= config.dist %>/package.json',
'!<%= config.dist %>/web.js',
'!<%= config.dist %>/node_modules'
]
}]
},
server: '.tmp'
}, | unknown | |
d14753 | val | You have a syntax error in your code. response => response.json() { } is invalid JS.
To log the response:
fetch('https://www.data.com')
.then(response => console.log(response))
To convert the response to JSON before logging it:
fetch('https://www.data.com')
.then(response => response.json())
.then(json => console.log(json)) | unknown | |
d14754 | val | Sessions are domain and context dependent. If both servlets are running in different contexts (different webapps), then you need to configure the servlet container to allow session sharing among contexts. In Tomcat and clones you can do this by setting the emptySessionPath attribute to true.
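For Tomcat 5.5/6.x that is an attribute on the HTTP connector in server.xml; a sketch (it sets the session cookie path to "/" so that contexts share the cookie):
<Connector port="8080" protocol="HTTP/1.1" emptySessionPath="true" />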
If those servlets are actually running in the same context, then the problem lies somewhere else. It's hard to nail it down based on information given as far. Maybe HttpSession#invalidate() was been called or the client has sent an invalid jsessionid cookie with the request. | unknown | |
d14755 | val | The way to do this is to create a tabless tab view and a custom segmented control whose action changes tabs in your tab view.
You can file a duplicate bug report at bugreport.apple.com:
*
*rdar://34206798 NSTabViewController.h documentation is outdated
*rdar://34206839 NSTabViewController should provide a way to customise
its NSSegmentedControl | unknown | |
d14756 | val | Fruits[0]+'Prop'
Adding a string to a string returns a string.
To accomplish what you need, you can create an object with fruits as keys and props as values:
const fruitVsProps = {
Apples: ApplesProp
// add more as you like
}
<App prop={fruitVsProps[Fruits[0]]} />
A: There are two ways you can do it.
1. The better way: use array.map() and then pass to your props.
let Fruits = ["Apples", "Pears", "Oranges"];
<ff> {Fruits.map((item) => <App props={item} />)} </ff>
2. The second way: you can use a template literal.
A: Create an object and add your object in that
const props = {
ApplesProp: { Name: "Green", Age: 34 }
}
const Fruits = ["Apples", "Pears", "Oranges"]
console.log(props[Fruits[0]+'Prop'])
and now you could use it like
<App prop={props[Fruits[0]+'Prop']} />
A: To accomplish the above task you need to create an object with fruits as keys and props as values
const AppleProps = { Name: "Green" }
const Fruits = ["Apples", "Banana"]
const FruitsAndProps = {
Apples: AppleProps
}
A: You need to consider passing your dynamic string inside backticks => `` instead of quotes, and use the eval() method to convert your string to a variable (so that the IDE knows you are referring to a variable, not a string). The result should look like this:
const ApplesProp = { Name: "Green", Age: 34 }
const Fruits = ["Apples", "Pears", "Oranges"]
console.log(eval(`${Fruits[0]}Prop`))
//the console returns an object. so passing it through props should be fine.
<App prop={eval(`${Fruits[0]}Prop`)}/>
Although there are cleaner ways to solve this, since I don't want to mess with your logic I came up with the above code. | unknown | |
d14757 | val | First, save the path of your data in one of the following ways.
Either, hardcoded
filestoread <- paste0("../rawdata/", 1999:2017, "_table.html")
or reading all html files in the directory
filestoread <- list.files(path = "../rawdata/", pattern="\\.html$")
Then use lapply()
library(rvest)
lapply(filestoread, function(x) try(read_html(x)))
Note: try() runs the code even when there is a file missing (throwing error).
The second part of your question is a little broad, depends on the content of your files, and there are already some answers, you could consider e.g. this answer. In principle you use a combination of ?html_nodes and ?html_table. | unknown | |
d14758 | val | Although it is too late to answer, it could help others
I tried this solution.
Works very well for me!
$("input[type='radio'].myClass").click(function(){
var $self = $(this);
if ($self.attr('checkstate') == 'true')
{
$self.prop('checked', false);
$self.each( function() {
$self.attr('checkstate', 'false');
})
}
else
{
$self.prop('checked', true);
$self.attr('checkstate', 'true');
$("input[type='radio'].myClass:not(:checked)").attr('checkstate', 'false');
}
})
A: Simply replace disabled with checked:
$input.prop("checked", false);
or for this element:
this.checked = false;
However, if you are looking for a form element which can be checked and unchecked, maybe <input type="checkbox" /> is what you need.
A: $inputs.filter(':checked').prop('checked', false);
A: I think you need checkbox instead of radio button to uncheck the checked :
<input class="checkDisable" type="checkbox" value="Test1" />Test1
<input class="checkDisable" type="checkbox" value="Test2" />Test2
<input class="checkDisable" type="checkbox" value="Test3" />Test3
<input class="checkDisable" type="checkbox" value="Test4" />Test4
(function ($) {
$(document).ready(function () {
$('input:checkbox.checkDisable').change(function(){
var $inputs = $('input:checkbox.checkDisable')
if($(this).is(':checked')){
$inputs.not(this).prop('disabled',true);
}else{
$inputs.prop('disabled',false);
$(this).prop('checked',false);
}
})
});
})(jQuery);
A: The answer of Pablo Araya is good, but ...
$self.prop('checked', true);
is superfluous here. The radio button already has a checked state for every click. | unknown | |
d14759 | val | If you want one row per recommended_object_id, the one with the most recent timestamp, then use window functions:
select r.*
from (select r.recommended_object_id, ed.exhibitor_name, sd.event_edition_id, r.object_type,
row_number() over (partition by recommended_object_id order by r.timestamp desc) as seqnum
from recommendations r left join
show_details sd
on r.event_edition_id = sd.event_edition_id left join
exhibitor_details ed
on r.recommended_object_id = ed.exhibitor_id
) r
where seqnum = 1
order by r.recommended_object_id; | unknown | |
d14760 | val | The above methods for adding " into a string are correct. The issue with my OP is that I was searching for a specific amount of white space before the tag. I removed the spaces and used the mentioned methods and it is now working properly. Thanks for the help!
A: string tblName = "<table name=" + '"' + "File" + '"' + ">";
should work, since the plus sign concatenates
A: It should be either
string tblName = @" <table name=""File"">";
or
string tblName = " <table name=\"File\">";
No need for concatenation. Also, what do you mean "it still doesn't work"? Just try Console.Write() and you'll see it's OK. If you mean the backslashes are visible while inspecting in the debugger, then it's supposed to be that way. | unknown | |
d14761 | val | This solution takes a slightly changed object structure for the end indicator, with a property isWord, because the original structure does not reflect entries like 'marc' and 'marcus': if only 'marc' is used, a zero at the end of the tree denotes the end of the word, but it does not allow adding a longer entry, because the property is a primitive and not an object.
Basically this solution first creates a complete tree with single letters and then joins all properties which have only one child object.
function join(tree) {
Object.keys(tree).forEach(key => {
var object = tree[key],
subKeys = Object.keys(object),
joinedKey = key,
found = false;
if (key === 'isWord') {
return;
}
while (subKeys.length === 1 && subKeys[0] !== 'isWord') {
joinedKey += subKeys[0];
object = object[subKeys[0]];
subKeys = Object.keys(object);
found = true;
}
if (found) {
delete tree[key];
tree[joinedKey] = object;
}
join(tree[joinedKey]);
});
}
var node = ["maria", "mary", "marks", "michael"],
tree = {};
node.forEach(string => [...string].reduce((t, c) => t[c] = t[c] || {}, tree).isWord = true);
console.log(tree);
join(tree);
console.log(tree);
.as-console-wrapper { max-height: 100% !important; top: 0; }
A recursive single pass approach with a function for inserting a word into a tree which updates the nodes.
It works by
*
*Checking the given string against all keys of the object; if the string starts with the actual key, then a recursive call with the remaining part of the string and the nested part of the trie is made.
*Otherwise, it checks how many characters are the same from the key and the string.
Then it checks the counter and creates a new node with the common part and two nodes, the old node content and a new node for the string.
Because of the new node, the old node is not more necessary and gets deleted, as well as the iteration stops by returning true for the update check.
*If no update took place, a new property with string as key and zero as value is assigned.
function insertWord(tree, string) {
var keys = Object.keys(tree),
updated = keys.some(function (k) {
var i = 0;
if (string.startsWith(k)) {
insertWord(tree[k], string.slice(k.length));
return true;
}
while (k[i] === string[i] && i < k.length) {
i++;
}
if (i) {
tree[k.slice(0, i)] = { [k.slice(i)]: tree[k], [string.slice(i)]: 0 };
delete tree[k];
return true;
}
});
if (!updated) {
tree[string] = 0;
}
}
var words = ["maria", "mary", "marks", "michael"],
tree = {};
words.forEach(insertWord.bind(null, tree));
console.log(tree);
insertWord(tree, 'mara');
console.log(tree);
.as-console-wrapper { max-height: 100% !important; top: 0; } | unknown | |
d14762 | val | I tested the code on Chrome and Firefox (result screenshot omitted).
Always add the <!DOCTYPE> declaration to your HTML documents, so that the browser knows what type of document to expect.
.header {
grid-area: hd;
}
.footer {
grid-area: ft;
}
.content {
grid-area: main;
}
.sidebar {
grid-area: sd;
}
* {box-sizing: border-box;}
.wrapper {
border: 2px solid #f76707;
border-radius: 5px;
background-color: #fff4e6;
max-width: 940px;
margin: 0 auto;
}
.wrapper > div {
border: 2px solid #ffa94d;
border-radius: 5px;
background-color: #ffd8a8;
padding: 1em;
color: #d9480f;
}
.wrapper {
display: grid;
grid-auto-rows: minmax(auto, auto);
grid-template-areas:
"hd hd ft"
"sd . ft"
"sd main ft"
"sd . ft";
}
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<div class="wrapper">
<div class="header">Header</div>
<div class="sidebar">Sidebar</div>
<div class="content">Content</div>
<div class="footer">Footer</div>
</div>
</body>
</html>
A: Here's what's happening:
First, let's see the gaps (they don't appear in many cases):
.wrapper {
display: grid;
grid-auto-rows: minmax(auto, auto);
grid-template-areas: "hd hd ft" "sd . ft" "sd main ft" "sd . ft";
}
.header {
grid-area: hd;
}
.footer {
grid-area: ft;
}
.content {
grid-area: main;
}
.sidebar {
grid-area: sd;
}
* {
box-sizing: border-box;
}
.wrapper {
border: 2px solid #f76707;
border-radius: 5px;
background-color: #fff4e6;
max-width: 940px;
margin: 0 auto;
}
.wrapper > div {
border: 2px solid #ffa94d;
border-radius: 5px;
background-color: #ffd8a8;
padding: 1em;
color: #d9480f;
}
<div class="wrapper">
<div class="header">Header</div>
<div class="sidebar">Sidebar</div>
<div class="content">Content</div>
<div class="footer">Footer</div>
</div>
What you're seeing is the rendering of &nbsp; (non-breaking space) characters in the HTML code.
As white space characters, they're invisible, which makes them hard to detect. But once you remove them, the layout works as expected.
.wrapper {
display: grid;
grid-auto-rows: minmax(auto, auto);
grid-template-areas: "hd hd ft" "sd . ft" "sd main ft" "sd . ft";
}
.header {
grid-area: hd;
}
.footer {
grid-area: ft;
}
.content {
grid-area: main;
}
.sidebar {
grid-area: sd;
}
* {
box-sizing: border-box;
}
.wrapper {
border: 2px solid #f76707;
border-radius: 5px;
background-color: #fff4e6;
max-width: 940px;
margin: 0 auto;
}
.wrapper > div {
border: 2px solid #ffa94d;
border-radius: 5px;
background-color: #ffd8a8;
padding: 1em;
color: #d9480f;
}
<div class="wrapper">
<div class="header">Header</div>
<div class="sidebar">Sidebar</div>
<div class="content">Content</div>
<div class="footer">Footer</div>
</div>
Lastly, why doesn't the faulty layout display in many cases?
When you copy HTML code as rendered on a web page (e.g., copy the code directly from the question), the &nbsp; characters, being HTML code, have already been rendered. So only plain (collapsible) white space gets copied and the layout will appear to be working fine.
Also, if you copy the HTML code from some code editors in some browsers (e.g., the Stack Snippet editor on Edge), the &nbsp; characters don't get copied, either. I needed to copy the code from the jsFiddle editor in Chrome to finally see the problem.
Also, if you hit the "Tidy" button in the editor using the original code, spaces will be added between the lines.
A: The empty cells have to be inserted; you won't get them by default. Just adding the html and body tags fixed the bottom gap issue:
.header {
grid-area: hd;
}
.footer {
grid-area: ft;
}
.content {
grid-area: main;
}
.sidebar {
grid-area: sd;
}
.empty-cell1 {
grid-area: ec1;
}
.empty-cell2 {
grid-area: ec2;
}
* {box-sizing: border-box;}
.wrapper {
border: 2px solid #f76707;
border-radius: 5px;
background-color: #fff4e6;
max-width: 940px;
margin: 0 auto;
}
.wrapper > div {
border: 2px solid #ffa94d;
border-radius: 5px;
background-color: #ffd8a8;
padding: 1em;
color: #d9480f;
}
.wrapper {
display: grid;
grid-template-areas:
"hd hd ft"
"sd ec1 ft"
"sd main ft"
"sd ec2 ft";
}
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<div class="wrapper">
<div class="header">Header</div>
<div class="sidebar">Sidebar</div>
<div class="content">Content</div>
<div class="footer">Footer</div>
<div class="empty-cell1"></div>
<div class="empty-cell2"></div>
</div>
</body>
</html> | unknown | |
d14763 | val | The query in MongoDB looks like:
Database.collection_name.find(
// This is the condition
{
$and: [
{
$or: [
{province: 'nb'},
{province: 'on'}
]
},
{
city: "toronto"
},
{
first_name: "steven"
}
]
},
// Specify the fields that you need
{
first_name: 1,
_id: 1
}
)
Documentation for $and $or
Some examples and the official documentation for MongoDB find here. | unknown | |
d14764 | val | This problem has no decent, easily implementable solution.
I gave up and used Sequence files which fit my requirements too.
A: Have you tried the following?
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat;
...
LazyOutputFormat.setOutputFormatClass(job, MapFileOutputFormat.class); | unknown | |
d14765 | val | Looks like the alter table did not succeed. Try removing the comma:
alter table NDQ01 add column Date date;
Check out \h alter table in psql for more information.
To clarify based on other answers, according to the current docs, date is not a reserved word in Postgres, and can be set as a column name (tested on version 11.2):
=# \d+ ndq01;
Table "public.ndq01"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+------+-----------+----------+---------+---------+--------------+-------------
date | date | | | | plain | |
SQL Key Words docs | unknown | |
d14766 | val | You could ask the same question about any store + load pair to different addresses: the load may be executed earlier internally than the older store due to out-of-order execution. In X86 this would be allowed, because:
Loads may be reordered with older stores to different locations but not with older stores to the same location
(source: Intel 64 Architecture Memory Ordering White Paper)
However, in your example, the lock perfix would prevent that, because (from the same set of rules):
Locked instructions have a total order
This means that the lock would enforce a memory barrier, like an mfence (and indeed some compilers use a locked operation as a fence). This will usually make the CPU stop the execution of the load until the store buffer has drained, forcing the store to execute first.
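To make that concrete, a rough C++ sketch of the situation (variable names are made up; on x86 the seq_cst fetch_add typically compiles to a lock-prefixed RMW, which is the full barrier described above):
#include <atomic>

std::atomic<int> flag{0};
std::atomic<int> counter{0};
std::atomic<int> other{0};

int example() {
    flag.store(1, std::memory_order_relaxed);      // plain store, may sit in the store buffer
    counter.fetch_add(1);                          // locked RMW (e.g. lock xadd): full barrier
    return other.load(std::memory_order_relaxed);  // cannot appear to happen before the store to flag
}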
A:
since we need to hold the cache line for the atomic operation in
read-write state until all preceding stores from the write buffer are
committed, but, performance considerations aside
If you hold a lock L while you do operations S that are of same nature as those prevented by L, that is there exist S' that can be blocked (delayed) by L and S can be blocked (delayed) by L', then you have the recipe for a deadlock, unless you are guaranteed to be the only actor doing that (which would make the whole atomic thing pointless). | unknown | |
d14767 | val | If you want to maintain the order of API calls, you can use the switchMap operator.
let's say that you have two api calls:
const apiCall1$ = this.http.get("/endpoint1");
const apiCall2$ = this.http.get("/endpoint2");
and you want to wait for apiCall1 to finish before sending apiCall2, you can do it like this:
apiCall1$.pipe(switchMap(() => apiCall2$)).subscribe({
next: (response) => {
//do something...
}
}); | unknown | |
d14768 | val | Update 2015-02-27 ~13:40 EDT: Appending hiddenfield to hyperlink serverside
I'm using GridView1 as I do not know the name of your gridview.
In the GridView1 RowDatabound Event (note the addition of "&hourDiff={7}" to the end of the format string and the addiition of the hiddenfield value in the parameter list):
Protected Sub GridView1_RowDataBound(sender As Object, e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles GridView1.RowDataBound
If e.Row.RowType = DataControlRowType.DataRow Then
Dim hl As HyperLink = TryCast(e.Row.FindControl("siteId"), HyperLink)
If hl IsNot Nothing Then
hl.NavigateUrl = String.Format("Reserve.aspx?id={0}&groupsize={1}" & _
"&facilityFees={2}&extrahour={3}&depoitAmt={4}&cancelAmt={5}" & _
"&keydeptAmt={6}&hourDiff={7}",
DataBinder.Eval(e.Row.DataItem, "siteId"),
DataBinder.Eval(e.Row.DataItem, "capacity"),
DataBinder.Eval(e.Row.DataItem, "RentalFeeAmount"),
DataBinder.Eval(e.Row.DataItem, "ExtraHourAmount"),
DataBinder.Eval(e.Row.DataItem, "DepositAmount"),
DataBinder.Eval(e.Row.DataItem, "CancellationAmount"),
DataBinder.Eval(e.Row.DataItem, "KeyDepositAmount"),
hf1.Value)
End If
End If
End Sub
Problem 1: Each time I compile my code and click the Select link
the Select link in a GridView is going to cause a postback, and therefore hourDiff is going to be 0 every time, since a postback is going to force all JS to be re-evaluated
Problem 2:
Be aware that every control that causes a postback is going to reset your page javascript. One way to get around that is to save to and restore from hidden field controls (<asp:HiddenField ID="hf1" runat="server" ClientIDMode="Static"...>). then you can access like this: $('#hf1').val(3.14); and the value is preserved between postbacks.
Cookies or local storage are other options
Also, is there any reason that the calculation must occur clientside? because you have another problem once you redirect to a new page. Be aware that redirecting, even to the same page, is not a postback.
Update 2015-02-26 14:30 EDT
I cannot see your search code so I'm going to make an assumption that no matter how the user picks a start/end date/time there is a Search button that causes a postback with four fields (assuming asp:TextBox's) of Search data, I'll call them: StartDate, StartTime, EndDate, EndTime.
The Search Button is going to cause a postback which should make the search fields available for use in the code behind.
Build 2 DateTime variables (S, E) and use the DataDiff() function to determine the difference you need.
In your case it would be something like this:
' At class level define the variable:
Dim hourDiff As Long = 0
' In the gridview databinding event do your calculation
Private Sub GridView1_DataBinding(sender As Object, e As System.EventArgs) Handles GridView1.DataBinding
Dim S as New DateTime( <replace with relevant parameters> )
Dim E as New DateTime( <replace with relevant parameters> )
hourDiff = DateDiff(DateInterval.Hour, S, E)
End Sub
' Then in the row databound event append the difference to the hyperlink
Private Sub GridView1_RowDataBound(sender As Object, e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles GridView1.RowDataBound
If e.Row.RowType = DataControlRowType.DataRow Then
dim hl as HyperLink = if(e.Row.FindControl("siteId"), Nothing)
If hl IsNot Nothing then
hl.NavigateURL = <build your Hyperlink and Append hourDiff>
End If
End If
End Sub
Amended 2015-02-26 18:06EDT
i would have preferred the option of using hiddenField if you could
explain how to tie hf1 to hourdiff.
Add a hidden field to your page like this:
<asp:HiddenField ID="hf1" runat="server" ClientIDMode="Static">
to js add this:
// define hourDiff
var hourDiff;
// Using jquery place an object reference to the hidden field into hourDiff
hourDiff = $("#hf1");
// Do your time calculations and assign results to hourDiff
hourDiff.val( time_calc_results );
// The above places the results in the hidden field Value property
// which will be available in the code behind as as `hf1.Value` after postback
I am spending more time trying to fix errors on the code you just posted.
Well that's to be expected as it's mostly from memory and is partially pseudo code.
With regards to your latest code, does that mean that I try the javascript
I posted at the top or use both?
The intent was that the search criteria could be selected client side or server side, the choice is yours. But you have to be careful of problem 1: you need a way to preserve the calculation between postbacks. Hence my suggestion for the Hidden Field.
Worst case Scenario: 5+ postbacks
In the worst case scenario you post back every time a user selects a time and date, so at minimum you have 5 postbacks, one for each start and end date and time and then clicking Search. Posting back like this makes it pointless to do the calculations client side. I hope you understand why. Ideally, in this situation the user has made all the selections prior to selecting Search and then you do the calculation once server side.
Scenarios 2: 1 Postback
In this scenario you have a js or jQuery picker that allows the user to select times and dates WITHOUT posting back to the server. This is a great way to collect data.
*
*If Clientside, you need to make sure the calculation is complete and stored in the Hidden Field before you post back
*If Serverside you need to grab the four fields and do the calculation in the Search button click event or in the Gridview DataBinding event.
In either case you need to modify the Hyperlink href. This could be done client side...maybe. It depends on whether you cause a postback before getting a chance to update the href of the select link.
To me it looks like you added your own hyperlink to each row. If that is so then you should be able to modify all the hyperlinks with a bit of jQuery code after you calc hourDiff:
// Assuming hourDiff is defined as
var hourDiff = $('#hf1');
... (other stuff)
// and you calc and assign to hourDiff as this:
hourDiff.val( endDate - startDate);
... (maybe more other stuff)
// then you modify your hyperlinks like this
$( "#GridView1 a.js_siteid" ).each( function( i, e ) {
this.href += "&diff=" + hourDiff.val();
} );
Also, I don't understand:
Dim E as New DateTime( <replace with relevant parameters> )
It's the VB equivalent of what you are doing in the js code with the Date objects. There are over a dozen ways to instantiate one so I just left it up to you to pick one that works for you. | unknown | |
d14769 | val | Try importing MatCardModule into your TestBed configuration.
import { MatCardModule } from '@angular/material/card';
beforeEach(async () => {
await TestBed.configureTestingModule({
imports: [
RouterTestingModule,
MatCardModule,
],
declarations: [LoginComponent],
providers: [{ provide: AuthService, useValue: authService }]
}).compileComponents();
}); | unknown | |
d14770 | val | I believe you'll want to use the library that doesn't give link errors.
The other errors you're getting are because you're linking mismatched code together.
Then focus on trying to determine what your platform identifier should be.
I think you were close but gave up too soon | unknown | |
d14771 | val | When you declare CountDash in the global scope, the code is only being run once, so CountDash is initialised with the value ' 0 - '. So even when you update count in your increment function, countDash will not be updated. If you'd like to keep countDash as a global variable for whatever reason (although we should reduce global variable use where possible) you can just update it after you update count in the increment function :) | unknown | |
d14772 | val | You can't without returning them as the function return value. In PHP, variables declared in a function (the arrays you're trying to print_r in this case) are only available within the scope of that function unless you declare them global with the global keyword.
Here's the details on variable scope in PHP: http://php.net/manual/en/language.variables.scope.php
You could construct a larger array to contain these arrays and return them from the test() function:
function test(){
//function code here....
////...
$results = array('str'=> $str,
'json'=> $json,
'base64'=>$base64,
'sig' => signMessage($base64, $secretPhrase)
) ;
return $results;
}
Then call it like this:
$results = test();
print_r($results['str']);
print_r($results['json']);
print_r($results['base64']);
print_r($results['sig']);
A: Many ways to do that:
first, you have to return the value if you want to use it in another class.
on your test you can do:
$something = new Conf();
$someelse = $something->test();
echo $someelse; | unknown | |
d14773 | val | The manual page has this to say:
find exits with status 0 if all files are processed successfully, greater than 0 if errors occur. This is deliberately a very broad description, but if the return value is non-zero, you should not rely on the correctness of the results of find.
This is a bit of a conundrum, but if you can be reasonably sure that there will be no unrelated errors, you could do something like
find . -name '*tests*' -print -exec false \;
If you want the list of found files on standard error, add a redirection >&2 | unknown | |
d14774 | val | Currently in MongoDB we cannot do this directly, since we don't have any functionality supporting Permutation/Combination on the query parameters.
But we can simplify the query by breaking the condition into parts.
Use Aggregation pipeline
$project with records (A=a AND B=b) --> This will give the records which have two of the conditions matching. (Our objective is to find the records which match 3 out of 4 or 4 out of 4 of the given conditions.)
Next in the pipeline use OR condition (C=c OR D=d) to find the final set of records which yields our expected result.
Hope it Helps!
A: The way you have it you have to do all permutations in your query. You can use the aggregation framework to do this without permuting all combinations. And it is generic enough to do with any K. The downside is I think you need MongoDB 3.2+ and also Spring Data doesn't support these operations yet: $filter $concatArrays
But you can do it pretty easy with the java driver.
[
{
$project:{
totalMatched:{
$size:{
$filter:{
input:{
$concatArrays:[ ["$A"], ["$B"], ["$C"],["$D"]]
},
as:"attr",
cond:{
$eq:["$$attr","a"]
}
}
}
}
}
},
{
$match:{
totalMatched:{ $gte:3 }
}
}
]
All you are doing is concatenating the values of all the fields you need to check into a single array. Then you select the subset of those elements that are equal to the value you are looking for (or any condition you want for that matter) and finally get the size of that array for each document.
Now all you need to do is to $match the documents that have a size of greater than or equal to what you want. | unknown | |
d14775 | val | The error message that you are getting is informing you that the image you are passing does not have 3 or 4 channels. This is the assertion that has failed.
This is because the camera.capture function does not return any values (API Documentation). Instead the rawCapture gets updated, and it is this that you should be passing to cvtColor.
Instead of
frame1 = cv2.cvtColor(camera.capture(rawCapture, format = "bgr", use_video_port = True), cv2.COLOR_RGB2GRAY)
Use
rawCapture.truncate(0)
camera.capture(rawCapture, format = "bgr", use_video_port = True)
frame1 = cv2.cvtColor(rawCapture.array, cv2.COLOR_BGR2GRAY)
And the same for each time you capture an image.
I haven't been able to test this as I don't currently have my Raspberry Pi and Camera on me but it should fix the problem.
A: I think you didn't close your camera, so python thinks that the camera is used by another program. Try to restart your Pi. The program should work after the restart. The second start of the program after the restart won't work. If this happens, close the camera in the last if-statement.
A: To save your time, I have built a complete application to detect motion and send notifications to iOS/Android. The notification will have text, image, and video.
Check this out | unknown | |
d14776 | val | I recommend that you don't try to implement the functionality you describe manually just by using FastAPI and Redis. It is a path of pain and suffering that is unjustified and highly ineffective.
Just use centrifugo and you'll be happy.
A: I recommend using queues to scale your real time application.
e.g. RabbitMQ, or even RPUSH and LPOP with Redis lists if you want to stay with Redis. This approach is much easier to implement than pub/sub and scales well.
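For illustration, here is a minimal sketch of the Redis-list approach using the redis-py client (the queue name "jobs", the payload, and the local Redis instance are assumptions made up for this example):
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Producer side: append a job/event to the tail of the list.
r.rpush("jobs", json.dumps({"event": "user_update", "user_id": 42}))

# Consumer side: pop from the head of the list.
# lpop returns None when the list is empty; blpop blocks until an item arrives.
raw = r.lpop("jobs")
if raw is not None:
    job = json.loads(raw)
    print("processing", job["event"], "for user", job["user_id"])
Each item is consumed by exactly one worker, which is what makes a list behave like a simple work queue rather than a broadcast channel.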
Handling and sharing events bidirectional with Pub/Sub & WebSockets is a pain in most languages. | unknown | |
d14777 | val | Not exactly. The decorator syntax:
@register.filter
def a():
pass
is syntactic sugar for:
def a():
pass
a = register.filter(a)
So register.filter in this case will be called with the first positional argument, 'name' being your function. The django register.filter function handles that usage however and returns the right thing even if the filter is sent as the first argument (see the if callable(name) branch)
It's more common for decorators that take multiple arguments to take the function to be decorated as the first positional argument (or, alternatively, to be written as function factories/closures), but I have a feeling the reason django did it this way was for backwards-compatibility. Actually I vaguely remember it not being a decorator in the past, and then becoming a decorator in a later django version.
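As a rough, stand-alone illustration of the factory/closure variant (register_filter and the registry here are made up for the example, not Django's actual implementation), the outer call takes the arguments and returns the real decorator, which then receives the function:
registry = {}

def register_filter(name):
    # Outer call: takes configuration arguments, returns the actual decorator.
    def decorator(func):
        registry[name] = func
        return func
    return decorator

@register_filter("shout")
def shout(value):
    return value.upper() + "!"

# The @ line above is just sugar for: shout = register_filter("shout")(shout)
print(registry["shout"]("hi"))  # prints HI!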
A: No. Simple decorators take the function they decorate as a parameter, and return a new function.
a = register.filter(a) | unknown | |
d14778 | val | First of all, you should never place custom code into core files. This destroys your upgradability. Create your own custom modules under app/code/local. There you can create your model which extends from Mage_Eav_Model_Entity_Attribute_Backend_Abstract.
Maybe this link helps you to create your module:
http://www.smashingmagazine.com/2012/03/01/basics-creating-magento-module/
Also you can use magerun (a CLI tool for Magento) to create a module: http://magerun.net/ | unknown |
d14779 | val | You've missed the ExecuteNonQuery(), so the query you have provided is never executed. Replace the code below and everything will be working.
cmd.Connection = cn
cmd.CommandText = "insert into StudResume values('" + TextBox1.Text + "','" + TextBox2.Text + "','" + DateTimePicker1.Value.ToShortDateString() + "'," + TextBox3.Text + ")"
cmd.ExecuteNonQuery()
cmd.Dispose()
cn.Close()
I.e. add cmd.ExecuteNonQuery() after providing the CommandText.
A: Change your Button3 code to the following; it will give you an error message in a MsgBox. Post the error message here so we may be able to help:
If (TextBox1.Text <> "" And TextBox2.Text <> "" And TextBox3.Text <> "") Then
Try
cn.Open()
cmd.Connection = cn
cmd.CommandText = "insert into StudResume values('" + TextBox1.Text + "','" + TextBox2.Text + "','" + DateTimePicker1.Value.ToShortDateString() + "'," + TextBox3.Text + ")"
cmd.ExecuteNonQuery()
cmd.Dispose()
cn.Close()
MsgBox("Details saved Successfully", MsgBoxStyle.Information, "Done")
TextBox1.Text = ""
TextBox2.Text = ""
TextBox3.Text = ""
DateTimePicker1.Value = Now
TextBox1.Focus()
Catch ex As Exception
MsgBox(ex.Message)
Finally
End Try
Else
MsgBox("Please Enter Complete Details", MsgBoxStyle.Critical, "Error")
End If
A: Try this way
cn =new SqlConnection("Data Source=ROHAN-PC\SQLEXPRESS;initial catalog=Libeasy;Integrated Security=true") | unknown | |
d14780 | val | There's no way to capture the browser event for canceling a save file. Using a confirm in an if statement (or something similar) is probably the best UX for that situation:
if(confirm('are you sure you want to export?'))
{
//export code
}
else
{
//cancel code
}
If you want your export button to become re-enabled I would call a re-enable function whenever your user's search(or whatever action they make to change data) is called. Or you can also use setTimeout() after the export button is hit and re-enable it after a certain time period.
A: Maybe you can send the dump file through a php script. There you (again: maybe, I don't know if it really works) can test the connection status with connection_status(). But if you send the file through a php script, you don't need to know the status, because if the script shuts down properly it doesn't matter whether the transmission was completed, if you just want to unlock the database.
Usually a normal database dump is safe anyway. So if you let the database dump the data it contains and save it to a file, there is no reason to lock the database while someone downloads a file. | unknown | |
d14781 | val | Basically all you need to do is do what you said you want to do in a completion block - just remove all the items from the datasource and update the table. The UITableView datasource delegate methods will do the rest for you and empty the tableView.
A: Just use the deleteRowsAtIndexPaths:withRowAnimation: method and apply the code from the following question — How to detect that animation has ended on UITableView beginUpdates/endUpdates?
It will give you the completion block functionality that you are looking for, so you will be able to call reloadData in it.
For Swift it will look as following:
CATransaction.begin()
CATransaction.setCompletionBlock {
//Reload data here
}
tableView.beginUpdates()
//Remove cells here
tableView.endUpdates()
CATransaction.commit()
A: Some sample code below. The usual way would be to perform a big delete or add operation on a background thread and then use a notification to trigger the merge on the main thread. So the code below assumes the following:
*
*You have a main ManagedObjectContext which is used by the
FetchedResultsController in your TableView
*You have a helper function to launch the delete or load methods on
background threads
*You create background managedObjectContexts and register for
ContextDidSave notifications which you then use to merge the changes
into the main context
Helper function for calling load or delete.
- (void)deleteDataInBackground {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void) {
[self deleteData];
});
}
Load function
/* Loads the required seed data */
// Usually called on a background thread and therefor we need to process the DidSave notification
// to merge the changed with the main context so the UI gets updated
func loadData() {
//FLOG(" called");
let bgContext:NSManagedObjectContext = NSManagedObjectContext(concurrencyType: NSManagedObjectContextConcurrencyType.ConfinementConcurrencyType)
// Register for saves in order to merge any data from background threads
NSNotificationCenter.defaultCenter().addObserver(self, selector:"storesDidSave:", name: NSManagedObjectContextDidSaveNotification, object:bgContext)
while (persistentStoreCoordinator == nil) {
//FLOG(@" persistentStoreCoordinator = nil, waiting 5 seconds to try again...");
sleep(5);
}
bgContext.persistentStoreCoordinator = persistentStoreCoordinator
insertStatusCode(bgContext, number: 0, name: "Not started")
insertStatusCode(bgContext, number: 1, name: "Started on track")
insertStatusCode(bgContext, number: 2, name: "Behind schedule")
insertStatusCode(bgContext, number: 3, name: "Completed")
insertStatusCode(bgContext, number: 4, name: "Completed behind schedule")
insertStatusCode(bgContext, number: 5, name: "On hold or cancelled")
bgContext.processPendingChanges()
do {
try bgContext.save()
//FLOG(" Seed data loaded")
} catch {
//FLOG(" Unresolved error \(error), \(error?.userInfo)")
}
}
Code to insert new records
func insertStatusCode(moc:NSManagedObjectContext, number:Int, name:String)
{
//FLOG(" called")
if let newManagedObject:NSManagedObject = NSEntityDescription.insertNewObjectForEntityForName("StatusCode", inManagedObjectContext:moc) {
newManagedObject.setValue(number, forKey:"number")
newManagedObject.setValue(name, forKey:"name")
}
}
Code to process the notifications and merge the changes into the main context
// NB - this may be called from a background thread so make sure we run on the main thread !!
// This is when transaction logs are loaded
func storesDidSave(notification: NSNotification!) {
// Ignore any notifications from the main thread because we only need to merge data
// loaded from other threads.
if (NSThread.isMainThread()) {
//FLOG(" main thread saved context")
return
}
NSOperationQueue.mainQueue().addOperationWithBlock {
//FLOG("storesDidSave ")
// Set this so that after the timer goes off we perform a save
// - without this the deletes don't appear to trigger the fetchedResultsController delegate methods !
self.import_or_save = true
self.createTimer() // Timer to prevent this happening too often!
if let moc = self.managedObjectContext {
moc.mergeChangesFromContextDidSaveNotification(notification)
}
}
}
And here is an Obj-C delete function; note that there are some checks to make sure the objects have not been deleted by another thread...
- (void)deleteData {
FLOG(@"deleteData called");
_deleteJobCount++;
[self postJobStartedNotification];
FLOG(@" waiting 5 seconds...");
sleep(5);
[self showBackgroundTaskActive];
NSManagedObjectContext *bgContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
// Register for saves in order to merge any data from background threads
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(storesDidSave:) name: NSManagedObjectContextDidSaveNotification object:bgContext];
while (self.persistentStoreCoordinator == nil) {
FLOG(@" persistentStoreCoordinator = nil, waiting 5 seconds to try again...");
sleep(5);
}
bgContext.persistentStoreCoordinator = [self persistentStoreCoordinator];
FLOG(@" fetching data...");
NSArray *companies = [self getData:@"Company" sortField:@"name" predicate:nil managedObjectContext:bgContext];
NSUInteger count = companies.count;
if (count>2) {
for (int i = 0; i<3; i++) {
NSManagedObject *object = [companies objectAtIndex:i];
// Must wrap this incase another thread deleted it already
@try {
if ([object isDeleted]) {
FLOG(@" object has been deleted");
} else {
FLOG(@" deleting %@", [object valueForKey:@"name"]);
[bgContext deleteObject:object];
[bgContext processPendingChanges];
NSError *error = nil;
if (![bgContext save:&error]) {
FLOG(@" Unresolved error %@, %@", error, [error userInfo]);
}
}
}
@catch (NSException *exception) {
FLOG(@" error deleting object");
FLOG(@" exception is %@", exception);
}
FLOG(@" waiting 5 seconds...");
sleep(0.01);
}
}
[[NSNotificationCenter defaultCenter] removeObserver:self name: NSManagedObjectContextDidSaveNotification object:bgContext];
/*
dispatch_async(dispatch_get_main_queue(),^(void){
[[NSNotificationCenter defaultCenter] removeObserver:self name: NSManagedObjectContextDidSaveNotification object:nil];
});
*/
FLOG(@" delete ended...");
[self showBackgroundTaskInactive];
_deleteJobCount--;
[self postJobDoneNotification];
}
If you have large batches, take a look at the Core Data batch functions. | unknown |
d14782 | val | As already pointed out, it does not make sense to compute MSER on a binary image. MSER basically thresholds an image (grayscale) multiple times using increasing (decreasing) thresholds and what you get is a so-called component tree like this here. The connected components which change their size/shape the least over the different binarizations are the so-called Maximally Stable Extremal Regions (e.g. the K in the schematic graphic). This is of course a very simplified explanation. Please ask Google for more details, you'll find enough.
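To make the input requirement concrete, here is a minimal sketch using OpenCV's Python bindings (the file name is a placeholder, and a recent OpenCV is assumed where detectRegions returns both the point sets and the bounding boxes; the exact signature varies a little between versions):
import cv2

img = cv2.imread("page.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # grayscale, not a thresholded/binary image

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

# Draw the convex hulls of the detected regions for inspection.
hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions]
cv2.polylines(img, hulls, True, (0, 255, 0))
print(len(regions), "MSER regions found")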
As you can see, thresholding an already thresholded image does not make sense. So pass the grayscale image to the MSER algorithm instead. MSER is a common basis for state-of-the-art text detection approaches (see here and here). | unknown | |
d14783 | val | According to https://www.w3schools.com/bootstrap/bootstrap_modal.asp, you can use the following to open and close the modal.
<div class="wrapper">
<!-- Modal button -->
<button id="modBtn" class="modal-btn" data-toggle="modal" data-target="#modal">Open Modal</button>
</div>
<!-- Modal -->
<div id="modal" class="modal">
<!-- Modal Content -->
<div class="modal-content">
<!-- Modal Header -->
<div class="modal-header">
<h3 class="header-title">Modal Header</h3>
<div class="close fa fa-close"></div>
</div>
<!-- Modal Body -->
<div class="modal-body">
<h3>Hello</h3>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-default" data-dismiss="modal">Close</button>
</div>
</div>
</div>
A: Your css and js references which are:
<link rel="stylesheet" href="~/Content/loginmodal.css">
<script src="~/Scripts/loginmodal.js"></script>
And the button to open it:
<div class="wrapper">
<!-- Modal button -->
<button id="modBtn" class="modal-btn">Open Modal</button>
</div>
should be placed inside Index.cshtml instead of the login partial view. | unknown | |
d14784 | val | You can change the Direction, to RTL. In CSS is just:
direction: rtl;
You can see a sample here. | unknown | |
d14785 | val | Need to narrow down the issue first:
*
*Check whether there are class preferences, plugins, or observer events that override or are triggered while processing the PayPal gateway.
*Also check whether the issue still exists after removing the 10% tax policy for US California customers
Hope it helps, thanks | unknown | |
d14786 | val | These selectors are generated by the compiler. They are the reserved selectors for C++ ivar construction and destruction.
Furthermore, the runtime calls these methods for you when GCC_OBJC_CALL_CXX_CDTORS is enabled. There is no need to call or declare them yourself.
Declaring them would result in a compilation error.
What can i do?
Choose a unique name for your selectors, and don't implement the ones which are generated for you (when GCC_OBJC_CALL_CXX_CDTORS is enabled).
What is it you are trying to do here? | unknown |
d14787 | val | You can fix this one by just setting 100% width on .editor.
Since the parent already has flex-wrap: wrap, this should work out just fine for you. The content below the editor will just wrap to below it.
.editor { /* <-- add this */
width: 100%;
}
body {
background-color: lightgrey;
}
.page-container {
width: 800px;
background-color: white;
}
.product-page {
display: flex; /* Disable me to make scrolling work */
flex-wrap: wrap;
}
.uploads-container {
text-align: left;
white-space: nowrap;
padding: 5px;
border: 1px solid black;
display: flex;
box-sizing: border-box;
}
.uploads-scroller {
overflow-x: scroll;
flex: 1 1 auto;
overflow-y: hidden;
}
.image-thumbs-container {
border: initial;
}
.image-thumb {
display: inline-block;
width: 100px;
border: solid 2px grey;
border-radius: 5px;
margin-right: 2px;
vertical-align: top;
}
<!DOCTYPE html>
<html lang="en">
<head>
<link rel="stylesheet" href="mock.css">
<title>Mockup</title>
</head>
<body>
<div class="page-container">
<div class="product-page">
<div class="unrelated-content">Page contains other content required to layout by flexbox.</div>
<div class="editor">
<div class="uploads-panel">
<div class="uploads-container">
<div class="uploads-file-container">File Upload<br>Widget Goes<br>Here</div>
<div class="uploads-scroller">
<div class="image-thumbs-container">
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
<div class="image-thumb">Thumb<br>Example</div>
</div>
</div>
</div>
</div>
<div class="workspace">
<h2>Some complicated workspace content goes here.</h2>
</div>
</div>
<div class="unrelated-content">Page contains other content required to layout by flexbox.</div>
</div>
</div>
</body>
</html>
A: Add width: 100% to your .editor element.
Or, just in case you really needed to, add max-width: 800px to any of the following elements... .editor or .uploads-container | unknown |
d14788 | val | CodeIgniter is not 100% complete and chances are it's not supporting all of MySQL's functions. Instead of writing it like this:
$this->db->select("username, DES_DECRYPT(password) as password");
write it like this:
$this->db->select("username, " . decrypt(password) . " as password");
where decrypt is the PHP equivalent of DES_DECRYPT. The PHP equivalent is Mcrypt; more info can be found here:
https://answers.yahoo.com/question/index?qid=20071201215913AAG8QKF
Also avoid encrypting your password in such a way that it can be decrypted. It's bad practice; no one but the user should know their password. | unknown |
d14789 | val | Change the position of the inlineFilters class and replace absolute with fixed :
.inlineFilters {
position: fixed;
z-index: 2;
padding: 5px;
background: #EFEFEF;
border-radius: 5px;
width: 188px;
} | unknown | |
d14790 | val | You can move the ListView into another widget. I have made an example for reference:
class OtherPage extends StatelessWidget {
final List<String> items;
const OtherPage({Key key, this.items}) : super(key: key);
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Details'),
),
body: Center(
child: ListView.builder(
shrinkWrap: true,
itemCount: items.length,
itemBuilder: (BuildContext ctxt, int index) {
return Card(
child: Padding(
padding: EdgeInsets.all(10),
child: Text(
"${(index + 1)}. " + items[index],
style: TextStyle(
fontSize: 20.0,
),
),
),
);
},
),
),
);
}
}
What you need to do is replace the existing ListView.builder in the Column with the following code:
RaisedButton(
child: Text('OtherPage'),
onPressed: () {
Navigator.of(context).push(
MaterialPageRoute(
builder: (context) => OtherPage(
items: _items,
),
),
);
},
),
Check this out and let me know in comments if you have any doubts. | unknown | |
d14791 | val | I feel the "(owned by Google)" comment suggests some ulterior motive which I don't believe exists.
There are several different issues which may be at work here:
1) You seem to be testing Youtube.com using Mobile Safari, and testing the embedded player using a webview:
This is not a fair comparison, as the webview has been shown to be very significantly slower than safari. (e.g. http://www.guypo.com/mobile/ios-browsers-speed-bakeoff/ )
2) You do not make it clear exactly what load time you are measuring
Are you creating the webview in this time and setting up the browser environment?
Are you loading the document which includes the script tag for the embedded player code in this time?
Did Safari have a hot cache for youtube.com at the time you loaded the video?
Many browsers start loading requests in the background as you type, before you tap to load a page - these kind of performance tweaks can significantly reduce apparent time to load a site, but make comparisons difficult.
3) The common user story they are optimising for is different
In almost all situations where the YouTube player API is inserted into a page or a website, the video does not automatically start playing.
By comparison, almost every YouTube watch page load begins playing the video automatically.
One of the optimisations which YouTube have discussed in presentations before is in-lining the first part of the video stream into the page even before the video player loads.
This is a trade-off of extra user bandwidth for faster video playback which makes sense when you know the user is going to definitely play the video: however if they were to do this for embedded videos then it would significantly slow down the loading time for players with videos which did not always start playing - which is a fairly significant percentage of all of the websites in the world!
4) YouTube get involved at different points in the two cases, and can only start optimising the experience later for embedded players.
YouTube is able to do optimisations on youtube.com which it is unable to do on third party sites/apps using their html5 player API.
In the case of watching a video on youtube.com: YouTube are involved right from the first http request - they know which video you are playing, and they know which browser you say you are on (so can optimise the experience as much as possible to pre-load the correct video and/or video player).
In the case of embedding an iframe:
*
*The browser first loads your page, which it then has to parse and extract urls from.
*Then it makes a request for javascript from YouTube (note that if you are using the javascript player API rather than the embedded iframe direct then that needs to be cached on the browser, so YouTube still can't optimise performance of this video at this point)
*Once your entire page has finished loading, it can create an iframe pointing to YouTube (all of this takes time on the CPU and memory management time)
*At this point, YouTube can start optimising things to try to make the experience faster. | unknown | |
d14792 | val | No. contains() can't do anything other than use Object.equals, because that's required by the specification.
That's not to say that it's not reasonable to want a notion of contains for an array; merely that you can't overload the existing concept.
You can straightforwardly create a static method:
static <T> boolean containsArray(List<? extends T[]> list, T[] query) {
return list.stream().anyMatch(e -> Arrays.equals(e, query));
}
And then invoke this where you would otherwise invoke list.contains(query).
This has the advantage that it works for any list (with reference-typed array elements): you don't have to create it specially, merely update these specialized comparisons.
(The above would work for any reference-typed array. You'd need to specialize it for primitive-typed arrays).
It also has the advantage that you don't have to deal with the thorny consequences highlighted by Stephen C (e.g. how indexOf, remove etc work).
There's another alternative: use a list element type which supports equals "correctly". For example, you can wrap arrays using Arrays.asList to store them in the list, so you have a List<List<T>> instead of List<T[]>.
This would be quite an invasive change: it would require changing the type of the list throughout your code; you've not provided any indication of how pervasively-used your list of arrays is.
A:
I am wondering if there is a way to easily modify the contains method in the List interface in Java without creating a custom class.
There isn't a way. The contains method of the standard implementations of List behave as specified by the List API; i.e. they use the equals method.
The flip-side that it would not be hard to extend the ArrayList class and override contains to do what you want. But if you are doing it properly, you need to consider whether you want:
*
*indexOf and lastIndexOf to be consistent with contains
*the semantics of equals(Object) to be consistent with it
*the semantics of a list returned by sublist(int, int) to be consistent with the semantics of the main list. | unknown | |
d14793 | val | As DDP is based on WebSockets, you can actually monitor the transmitted data of those requests within the Chrome DevTools. To do so just switch the Network tab and then choose websocket from the list and click the Frames tab:
A: There is a Chrome extension that adds monitoring of Meteor DDP traffic to the Dev Tools: https://chrome.google.com/webstore/detail/ddp-monitor/ippapidnnboiophakmmhkdlchoccbgje
Source code: https://github.com/thebakeryio/meteor-ddp-monitor
A: Meteor Toys has a DDP monitor that logs to the console - which lets you view DDP data in any browser. | unknown | |
d14794 | val | Using uic to compile the .ui file would do. Some instruction here. You can also use qtcreator on linux, which includes the vim editing mode plugin.
A: AFAIK if you add all your files to your *.pro project file, qmake it and compile the result with cl everything should work fine. Just for the task of processing *.ui files you can use the Qt UI Compiler.
From experience I would say that trying to use Vim this way is a real challenge and I wish you good luck with that. In case you change your mind maybe you should know that Qt Creator has a Vim mode called FakeVim, maybe you should take a look at that as well.
Update:
You don't create a header file that inherits from the generated header, you create a class that inherits from or uses the generated class. Considering that, I would really recommend you use Qt Creator or, if you really want Vim, use FakeVim. Using Vim in this situation is hard and if you're not an advanced, or at least intermediate Vim user you will find it very painful. Vim is powerful but hard to setup for beginners. You will need plug-ins for autocomplete, project tree or neat jumps from header to source just to name a few and setting these up is not very user friendly/straight forward.
My advice: Use Qt Creator or FakeVim. | unknown | |
d14795 | val | You can use String.PadRight() to force the string to a specific size, rather than using tabs.
A: When you are using String.Format, each format item has the following syntax:
{index[,alignment][:formatString]}
Thus you can specify alignment which indicates the total length of the field into which the argument is inserted and whether it is right-aligned (a positive integer) or left-aligned (a negative integer).
Also it's better to use StringBuilder to build strings:
var builder = new StringBuilder();
var employee = employees[number];
builder.AppendFormat("Notes {0,20} {1,10} {2,15}",
employee.Notes, employee.FirstNotes, employee.SecondNotes);
A: You would first have to loop over every entry to find the largest one so you know how wide to make the columns, something like:
var notesWidth = employees.Max(e => e.Notes.Length);
var firstNotesWidth = employees.Max(e => e.FirstNotes.Length);
// etc...
Then you can pad the columns to the correct width:
var output = new StringBuilder();
foreach(var employee in employees)
{
output.Append(employee.Notes.PadRight(notesWidth+1));
output.Append(employee.FirstNotes.PadRight(firstNotesWidth+1));
// etc...
}
And please don't do a lot of string "adding" ("1" + "2" + "3" + ...) in a loop. Use a StringBuilder instead. It is much more efficient. | unknown | |
d14796 | val | The error can be happening because of how Web3 is loaded. Please try this function:
async loadWeb3(){
if(window.ethereum){
window.web3 = new Web3(window.ethereum)
await window.ethereum.request({ method: 'eth_requestAccounts' })
}
else if(window.web3){
window.web3 = new Web3(window.ethereum)
}
else{
window.alert("Non-Ethereum browser detected. You should consider trying MetaMask!")
}
}
Also, do not forget to add the import on your javascript class:
import Web3 from 'web3'
and install the import with the npm:
npm i web3 --save | unknown | |
d14797 | val | Try hitting /resources/plans/getplans/242353 as defined in your web.xml | unknown | |
d14798 | val | Try it programmatically in .java file without changing xml file
SpannableString s = new SpannableString("your content");
s.setSpan(new BackgroundColorSpan(getResources().getColor(R.color.colorAccent)), 0, s.length(), Spanned.SPAN_INCLUSIVE_INCLUSIVE);
text.setText(s); | unknown | |
d14799 | val | With AJAX enabled pages, you should use the ScriptManager to register scripts:
ScriptManager.RegisterClientScriptBlock(Page, typeof(MyPage),
"MyScript", "GoStuff()", true)
You can use this to register all your scripts (Original load, postback, AJAX postback).
A: It works if you specify the UpdatePanel being updated on the AJAX call back. For example:
ScriptManager.RegisterClientScriptBlock(UpdatePanelMain, typeof(UpdatePanel),
UpdatePanelMain.ClientID,
"document.getElementById('imgLoading').style.display = 'none';" +
"document.getElementById('divMultiView').style.display = 'inline';",
true);
The control in the first argument must be inside the update panel or the update panel itself that triggers the update.
A: As far as I know for this you would be forced to be calling this method via a PostBack and not an ajax call. There may be OTHER ways of doing this, but it is not possible with Page.ClientScript....
A: In general, when loading external javascript after appending an element innerHTML with a block containing such script, one needs to evaluate (eval) the script in order for it to work properly and render itself into the current loaded document.
I'd suggest doing on of the following:
Use an external tool such as YUI get utility which is supposed to enable such behavior or do some evaluation for scripts yourself like this
A: If there's anyone else out there like myself, and the accepted answer STILL won't work for you, then look no further.
Check out this link - a simple class that pops a Javascript alert no matter if you're on page load, unload, AJAX request, etc:
WebMsgBox.Show("Your message here"); | unknown | |
d14800 | val | If I understand correctly, the problem is that SnapHelper snaps the center of the current view to the center of the recycler. As the docs say:
The implementation will snap the center of the target child view to the center of the attached RecyclerView.
If you want the the current view to snap to the start of the recycler use a libraray such as this one. Import it with gradle and activate it like this:
new GravitySnapHelper(Gravity.START).attachToRecyclerView(recyclerview); | unknown |