_id | partition | text | language | title |
---|---|---|---|---|
d6201 | train | I could not find relevant help via google that a total beginner (like me for C) could follow, so I will Q&A this topic.
*
*First of all you need an .ico file. Put it in the folder with your main.c file.
*In CodeBlocks go to File -> New -> Empty File and name it icon.rc. It has to be visible in the Workspace/Project otherwise CodeBlocks will not be aware of this file. It will show up there in a project folder called Resources .
*Put the following line in it: MAINICON ICON "filename.ico". MAINICON is just an identifier, you can choose something different. More info 1 & More info 2.
*Save the files and compile - CodeBlocks will do everything else for you
What happens now is that windres.exe (the resource compiler) compiles the resource script icon.rc, together with the icon, into a binary object file at obj\Release\icon.res, and the linker adds it to the executable.
It's so easy yet it took me quite a while to find it out - I hope I can save someone else having the same problem some time. | unknown | |
d6202 | train | Turned out to be because I had enableCrossAppRedirects="true" | unknown | |
d6203 | train | IP addresses reserved for HSRP have the property isReserved set to true and the text "Reserved for HSRP." in the note property.
You can use the method SoftLayer_Network_Subnet::getIpAddresses with the following filter to get those IP Addresses:
objectFilter={'ipAddresses':{'note':{'operation':'Reserved for HSRP.'}}}
Below you can see an example in python.
"""
Get Ip Addresses of a subnet which are reserved for HSRP protocol.
Important manual pages:
http://sldn.softlayer.com/reference/services/SoftLayer_Network_Subnet/getIpAddresses
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Network_Subnet_IpAddress
https://sldn.softlayer.com/article/object-Masks
https://sldn.softlayer.com/article/object-filters
License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <[email protected]>
"""
import SoftLayer
from pprint import pprint as pp
# Your SoftLayer API username and key.
API_USERNAME = 'set-me'
API_KEY = 'set-me'
# The id of the subnet you wish to get information about.
subnetId = 135840
# An object mask helps to get more specific information
mask = 'id,ipAddress,isReserved,note'
# Use an object filter to get the IP addresses reserved for HSRP
object_filter = {
    'ipAddresses': {
        'note': {'operation': 'Reserved for HSRP.'}
    }
}
# Call the SoftLayer API client
client = SoftLayer.create_client_from_env(username=API_USERNAME, api_key=API_KEY)
try:
    result = client['SoftLayer_Network_Subnet'].getIpAddresses(id=subnetId,
                                                               mask=mask,
                                                               filter=object_filter)
    pp(result)
except SoftLayer.SoftLayerAPIError as e:
    pp('Unable to get the IP addresses: %s, %s' % (e.faultCode, e.faultString))
Links:
https://knowledgelayer.softlayer.com/articles/static-and-portable-ip-blocks
http://sldn.softlayer.com/reference/services/SoftLayer_Network_Subnet/getIpAddresses
http://sldn.softlayer.com/reference/datatypes/SoftLayer_Network_Subnet
I hope this helps you.
Regards, | unknown | |
d6204 | train | I did a few tests and this is what I did so far. It works, all as expected. But I will not accept it as the answer yet and will leave it for some time for the community to review. If someone sees problems with this approach, please point them out in comments.
ErrorMessage is of simple format:
{ message:string }
Service:
getPDF() {
return this.http.get(`${environment.baseUrl}/api/v.1/reports/...`, { responseType: ResponseContentType.Blob })
.map((res) => {
return {
blob: new Blob([res.blob()], { type: 'application/pdf' }), filename: this.parseFilename(res)
}
})
.catch((res) => {
let fileAsTextObservable = new Observable<string>(observer => {
const reader = new FileReader();
reader.onload = (e) => {
let responseText = (<any>e.target).result;
observer.next(responseText);
observer.complete();
}
reader.readAsText(res.blob(), 'utf-8');
});
return fileAsTextObservable
.switchMap(errMsgJsonAsText => {
return Observable.throw(JSON.parse(errMsgJsonAsText));
})
});
} | unknown | |
d6205 | train | Usually in this situation you need to use Activity.runOnUiThread()
Timer t = new Timer();
//Set the schedule function and rate
t.scheduleAtFixedRate(new TimerTask() {
@Override
public void run() {
//Called each time the period elapses (10000 milliseconds here - the period parameter)
runOnUiThread(new Runnable() {
public void run() {find(v);}
});
}
},
//Set how long before to start calling the TimerTask (in milliseconds)
0,
//Set the amount of time between each execution (in milliseconds)
10000); | unknown | |
d6206 | train | You should add the following to your .htaccess file:
<Files "wp-load.php">
Order Deny,Allow
Deny from all
Allow from localhost
Allow from 127.0.0.1
</Files>
I can't think of a reason to bootstrap WordPress from an external server .... | unknown | |
d6207 | train | Template columns render their own content. You would have to get each control and compare the two controls within the template, by using FindControl as you do and comparing the underlying value. Cell.Text is only useful for bound controls.
A: if (((Label)e.Row.FindControl("lblProblemName")).Text == "Heart problem")
{
DropDownList ddlAssignDoc
= (DropDownList)e.Row.FindControl("ddlAssignDoc");
ddlAssignDoc.DataSource = Cardio;
ddlAssignDoc.DataBind();
} | unknown | |
d6208 | train | The problem is related to the use of the EnableNotificationQueue method. In fact, as you can read at http://msdn.microsoft.com/en-us/library/windows/apps/windows.ui.notifications.tileupdater.enablenotificationqueue:
When queuing is enabled, a maximum of five tile notifications can
automatically cycle on the tile.
Try to pass false to this method.
According to the documentation, you can schedule up to 4096 notifications. Refer to http://hansstan.wordpress.com/2012/09/02/windows-8-advanced-tile-badge-topics/ for a working example.
d6209 | train | Until it is fixed (see bug 7815), this workaround can be used:
SELECT uniqExact((id, date)) AS count
FROM table
ARRAY JOIN values
WHERE values.1 = 'pattern'
When there is more than one Array column, it can be done this way:
SELECT uniqExact((id, date)) AS count
FROM
(
SELECT
id,
date,
arrayJoin(values) AS v,
arrayJoin(values2) AS v2
FROM table
WHERE v.1 = 'pattern' AND v2.1 = 'pattern2'
)
A:
values Array(Tuple(LowCardinality(String), Int32)),
Do not use Tuple. It brings only cons:
It's still 2x the files on disk.
It gives a twofold slowdown when you extract only one tuple element.
https://gist.github.com/den-crane/f20a2dce94a2926a1e7cfec7cdd12f6d
valuesS Array(LowCardinality(String)),
valuesI Array(Int32) | unknown | |
d6210 | train | Try this:
*
*Open your web console.
*Read the message in console tab, the last one.
*If there is a message saying that $ is not defined, then add jQuery to your HTML file:
<script src="http://code.jquery.com/jquery-2.1.0.min.js"></script>
That's downloading jQuery from the internet when you load your page, or you can download that and put it with a local path:
<script src="/jquery-2.1.0.min.js"></script>
or
<script src="some/directory/jquery-2.1.0.min.js"></script>
If you need to understand what to put in src check this. | unknown | |
d6211 | train | An associative data structure with varying data types is exactly what a struct is...
struct SettingsType
{
bool Fullscreen;
int Width;
int Height;
std::string Title;
} Settings = { true, 1680, 1050, "My Application" };
Now, maybe you want some sort of reflection because the field names will appear in a configuration file? Something like:
SettingsSerializer x[] = { { "Fullscreen", &SettingsType::Fullscreen },
{ "Width", &SettingsType::Width },
{ "Height", &SettingsType::Height },
{ "Title", &SettingsType::Title } };
will get you there, as long as you give SettingsSerializer an overloaded constructor with different behavior depending on the pointer-to-member type.
A: C++ is a strongly typed language. The containers hold exactly one type of object so by default what you are trying to do cannot be done with only standard C++.
On the other hand, you can use libraries like boost::variant or boost::any that provide types that can hold one of multiple (or any) type, and then use a container of that type in your application.
Rather than an array, you can use std::map to map from the name of the setting to the value:
std::map<std::string, boost::variant<bool,int,std::string> >
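As an aside (not part of the original answer), the heterogeneous settings map that needs boost::variant in C++ is native in a dynamically typed language; here is a Python sketch of the same data shape, just to show what the variant map models:

```python
# In Python a dict holds mixed value types natively -- which is
# essentially what std::map<std::string, boost::variant<...>> emulates.
settings = {
    "Fullscreen": True,
    "Width": 1680,
    "Height": 1050,
    "Title": "My Application",
}

for name, value in settings.items():
    print(f"{name} = {value!r} ({type(value).__name__})")
```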
A: #include <map>
#include <string>
std::map<std::string,std::string> settings;
settings["Fullscreen"] = "true";
settings["Width"] = "1680";
settings["Height"] = "1050";
settings["Title"] = "My Application";
Could be one way of doing it if you want to stick with the STL.
A: One solution could be to define the ISetting interface like:
class ISetting{
public:
virtual void save( IStream* stream ) = 0;
virtual ~ISetting(){}
};
after that you can use a map in order to store your settings:
std::map< std::string, ISetting* > settings;
One example of the boolean setting is:
class BooleanSetting : public ISetting{
private:
bool m_value;
public:
BooleanSetting(bool value){
m_value = value;
}
void save( IStream* stream ) {
(*stream) << m_value;
}
virtual ~BooleanSetting(){}
};
in the end:
settings["booleansetting"]=new BooleanSetting(true);
settings["someothersetting"]=new SomeOtherSetting("something");
A: One possible solution is to create a Settings class which can look something like
class Settings {
public:
Settings(std::string filename);
bool getFullscreen() { return Fullscreen; }
// ...etc.
private:
bool Fullscreen;
int Width;
int Height;
std::string Title;
};
This assumes that the settings are stored in some file. The constructor can be implemented to read the settings using whatever format you choose. Of course, this has the disadvantage that you have to modify the class to add any other settings.
A: To answer your question, you could use boost::any or boost::variant to achieve what you would like. I think variant is better to start with.
typedef boost::variant<
std::string,
int,
bool
> SettingVariant;
std::map<std::string, SettingVariant> settings;
To not answer your question, using typeless containers isn't what I would recommend. Strong typing gives you a way to structure code in a way that the compiler gives you errors when you do something subtly wrong.
struct ResolutionSettings {
bool full_screen;
size_t width;
size_t height;
std::string title;
};
Then just a simple free function to get the default settings.
ResolutionSettings GetDefaultResolutionSettings() {
ResolutionSettings settings;
settings.full_screen = true;
settings.width = 800;
settings.height = 600;
settings.title = "My Application";
return settings;
}
If you're reading settings off disk, then that is a little different problem. I would still write strongly typed settings structs, and have your weakly typed file reader use boost::lexical_cast to validate that the string conversion worked.
ResolutionSettings settings;
std::string str = "800";
settings.width = boost::lexical_cast<size_t>(str);
You can wrap all the disk reading logic in another function that isn't coupled with any of the other functionality.
ResolutionSettings GetResolutionSettingsFromDisk();
I think this is the most direct and easiest to maintain (especially if you're not super comfortable in C++). | unknown | |
d6212 | train | File permissions on a user's /home/user/.ssh directory must be 700, and the /home/user/.ssh/authorized_keys must be 600. Meanwhile, it is essential that all files in each .ssh directory are owned by the user in whose home directory they reside. To change ownership recursively, you can:
chown -R username:username /home/username/.ssh
If you have multiple users and need to do this for each of them, you can use this loop:
for SSHUSER in user1 user2 user3 user4 user5; do
# Add the authorized_keys file if it doesn't already exist
touch /home/$SSHUSER/.ssh/authorized_keys
# Set its permissions
chmod 600 /home/$SSHUSER/.ssh/authorized_keys
# Set directory permissions
chmod 700 /home/$SSHUSER/.ssh
# Set ownership for everything
chown -R $SSHUSER:$SSHUSER /home/$SSHUSER/.ssh
done; | unknown | |
d6213 | train | You could slice the ID column from df1 as a DataFrame and merge on ID:
import pandas as pd
df1 = pd.DataFrame({'ID': [1, 1, 2, 2, 3],
'A': [4, 4, 1, 2, 3]
})
df2 = pd.DataFrame({'ID': [1, 2, 3],
'B': [2, 2, 9]
})
merged = df1[['ID']].merge(df2, how='left')
This returns a DataFrame of the form:
ID B
0 1 2
1 1 2
2 2 2
3 2 2
4 3 9
A: perform the join and pick up only columns in df2
df2.merge(df1, on='ID')[df2.columns]
# output:
B ID
0 2 1
1 2 1
2 2 2
3 2 2
4 9 3 | unknown | |
d6214 | train | If you just want to declare a mock of your service instead of importing the entire SecurityConfig, you can easily do so by declaring this in your test config :
@Configuration
public class TestConfig {
@Bean
public PreAuthorizationSecurityService mockedSecurityService() {
//providing you use Mockito for mocking purpose
return mock(PreAuthorizationSecurityService.class);
}
}
And then set your mock to return true when needed. You can also provide your own mocked implementation that always return true.
That said, this is not the usual way to use Spring Security. And you should consider refactoring your app to use the standard role-based system, it will save you some trouble.
I don't know what your PreAuthorizationSecurityService looks like and if this can apply in your situation but in most cases, it should and that's what you should aim for.
With this standard role based approach, Spring Security Test (v4+ I think) easily allows you to mock connected user with given roles with annotation like @WithMockUser. | unknown | |
d6215 | train | I believe this is equivalent. If not - could you provide an sqlfiddle with some data and an explanation?
SELECT
pt.id,
pt.parent_id,
SUM(IF(m.menge IS NULL,0,m.menge*p.preis_kostenanschlag)) as summe,
getBauNrKomplett(p.id) as bauNrKomplett
FROM positionstyp pt
LEFT JOIN projektposition p
ON (p.positionstyp_id=pt.id)
LEFT JOIN menge m
ON (m.projektposition_id=p.id)
GROUP BY pt.id | unknown | |
d6216 | train | This looks like a permission error, since it is not able to write to your node_modules folder.
A: Sometimes sudo doesn't work; you just have to run su first, press Enter, and then type commands normally, like tns plugin add nativescript-xxxxxxx | unknown | |
d6217 | train | So one answer to your question is that you're not necessarily looking for documentation for Webpack and React, but Babel (or similar transpiler) and React. Babel-loader (which is the loader you're using above) transpiles React's JSX format into javascript the browser can read via Webpack. Here's the babel-loader documentation.
Here are a few other resources that may help:
1) Setup React.js with Npm, Babel 6 and Webpack in under 1 hour
2) Setup a React Environment Using webpack and Babel
3) React JSX transform
4) And if it is of interest to you: React without JSX | unknown | |
d6218 | train | Use <f:viewParam> (and <f:event>) in the target view instead of @ManagedProperty (and @PostConstruct).
<f:metadata>
<f:viewParam name="eventCode" value="#{displayResults.eventCode}" />
<f:event type="preRenderView" listener="#{displayResults.init}" />
</f:metadata>
As a bonus, this also allows for more declarative conversion and validation without the need to do it in the @PostConstruct.
See also:
*
*ViewParam vs @ManagedProperty(value = "#{param.id}")
*Communication in JSF2 - Processing GET request parameters | unknown | |
d6219 | train | Create a job which checks out stuff from svn, like you would do for a job that does compilation.
Then create an Execute Windows batch command or Execute shell build step, where you put the command to run the java program, probably java -jar .... | unknown | |
d6220 | train | I think you need to do a better job of defining what, exactly it is that you want to compare. There's no such thing as a p value of a mean. What are you comparing, base pair variance between a gene in column 1 and one in column 2? Or is col. 1 the full sequence of one gene and col2 the full sequence of a second gene? Your question doesn't make it clear what you're analyzing, and without that you may have good math that means nothing.
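For illustration only, here is Welch's t statistic (the unequal-variance form of the t test) computed from summary statistics in plain Python — the numbers below are invented placeholders, not taken from the question:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from two samples' means (m),
    standard deviations (s) and sizes (n)."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Hypothetical summary values -- replace with your own data.
t = welch_t(m1=12.4, s1=2.1, n1=30, m2=11.0, s2=1.8, n2=28)
print(round(t, 3))
```

Note that without the standard deviations this cannot be computed at all.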
Here's a good definition of t test, assuming that that test is, in fact, what you ought to be using. Note that this test requires not only the difference between the means (which you could calculate from what you showed us), the standard deviation of each mean (which you didn't), and the number of items (which you did). This means we only have 2 out of 3 of the necessary inputs. To get the 3rd, either you need to supply it, or you need to supply the raw data which produced it. | unknown | |
d6221 | train | I think it's a question of user rights. Your apache + php is probably launched by root. You have to set rights with root.
Two possibilities :
sudo su
chmod -R 777 app/cache
or
sudo chown -R www-data app/cache
sudo chmod -R 777 app/cache
You will probably have to do the same thing with the log file.
My vagrant file if you need it :
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "precise64" #Box Name
config.vm.box_url = "http://files.vagrantup.com/precise64.box" #Box Location
config.vm.provider :virtualbox do |virtualbox|
virtualbox.customize ["modifyvm", :id, "--memory", "2048"]
end
config.vm.synced_folder ".", "/home/vagrant/synced/", :nfs => true
#config.vm.network :forwarded_port, guest: 80, host: 8080 # Forward 8080 rquest to vagrant 80 port
config.vm.network :private_network, ip: "1.2.3.4"
config.vm.network :public_network
config.vm.provision :shell, :path => "vagrant.sh"
end
vagrant.sh
#!/usr/bin/env bash
#VM Global Config
apt-get update
#Linux requirement
apt-get install -y vim git
#Apache Install
apt-get install -y apache2
#Apache Configuration
rm -rf /var/www
ln -fs /home/vagrant/synced/web /var/www
chmod -R 755 /home/vagrant/synced
#Php Install
apt-get install -y python-software-properties
add-apt-repository -y ppa:ondrej/php5
apt-get update
apt-get install -y php5 libapache2-mod-php5
#Php Divers
apt-get install -y php5-intl php-apc php5-gd php5-curl
#PhpUnit
apt-get install -y phpunit
pear upgrade pear
pear channel-discover pear.phpunit.de
pear channel-discover components.ez.no
pear channel-discover pear.symfony.com
pear install --alldeps phpunit/PHPUnit
#Php Configuration
sed -i "s/upload_max_filesize = 2M/upload_max_filesize = 10M/" /etc/php5/apache2/php.ini
sed -i "s/short_open_tag = On/short_open_tag = Off/" /etc/php5/apache2/php.ini
sed -i "s/;date.timezone =/date.timezone = Europe\/London/" /etc/php5/apache2/php.ini
sed -i "s/memory_limit = 128M/memory_limit = 1024M/" /etc/php5/apache2/php.ini
sed -i "s/_errors = Off/_errors = On/" /etc/php5/apache2/php.ini
#Reload apache configuration
/etc/init.d/apache2 reload
#Composer
php -r "eval('?>'.file_get_contents('https://getcomposer.org/installer'));"
mv -f composer.phar /usr/local/bin/composer.phar
alias composer='/usr/local/bin/composer.phar'
#Postgres
apt-get install -y postgresql postgresql-client postgresql-client php5-pgsql
su - postgres -c "psql -U postgres -d postgres -c \"alter user postgres with password 'vagrant';\""
A: An updated answer for nfs:
config.vm.synced_folder "www", "/var/www", type:nfs, :nfs => { :mount_options => ["dmode=777","fmode=777"] }
A:
Update as of 15th Jan 2016. Instructions for Vagrant 1.7.4+ and Symfony 3. This works.
On a fresh Ubuntu 14.04 install, ACL was installed but I couldn't use +a or setfacl to fix the permissions issues, and of course, as soon as you change any permissions in terminal in vagrant, they're reset to vagrant:vagrant again.
I added the following to my vagrant file:
# Symfony needs to be able to write to it's cache, logs and sessions directory in var/
config.vm.synced_folder "./var", "/vagrant/var",
:owner => 'vagrant',
:group => 'www-data',
:mount_options => ["dmode=775","fmode=666"]
This tells Vagrant to sync var/logs and var/cache (not to be confused with /var/, these are in the root Symfony directory) and have them owned by vagrant:www-data. This is the same as doing a sudo chown vagrant:www-data var/, except Vagrant now does it for you and enforces that instead of enforcing vagrant:vagrant.
Note there are no 777 'hacks' here.
As soon as I added that, I didn't get any more permissions errors in the apache log and I got a nice Symfony welcome screen. I hope that helps someone!
A: Nothing worked for me other than changing location of cache and logs folder to /tmp
AppKernel.php
public function getCacheDir()
{
if (in_array($this->getEnvironment(), ['test','dev'])) {
return '/tmp/sfcache/'.$this->getEnvironment();
}
return parent::getCacheDir();
}
public function getLogDir()
{
if (in_array($this->getEnvironment(), ['test','dev'])) {
return '/tmp/sflogs/'.$this->getEnvironment();
}
return parent::getLogDir();
} | unknown | |
d6222 | train | Select from the master table and LEFT JOIN it with the clicks table.
A: a LEFT JOIN works for your query
CREATE TABLE products (
`ID` INTEGER,
`NAME` VARCHAR(14)
);
INSERT INTO products
(`ID`, `NAME`)
VALUES
('0', 'first product'),
('1', 'second product'),
('2', 'thirdproduct'),
('3', 'forth product');
CREATE TABLE clicks (
`PRODUCT_ID` INTEGER,
`CLICKS` INTEGER
);
INSERT INTO clicks
(`PRODUCT_ID`, `CLICKS`)
VALUES
('0', '1'),
('1', '3');
SELECT p.ID,p.NAME,IFNULL(c.CLICKS,0)
FROM products p LEFT JOIN clicks c ON p.ID = c.PRODUCT_ID
ID | NAME | IFNULL(c.CLICKS,0)
-: | :------------- | -----------------:
0 | first product | 1
1 | second product | 3
2 | thirdproduct | 0
3 | forth product | 0
db<>fiddle here
A: Select p.id, p.name, ifnull(c.clicks,0) as clicks from product p
left join click c on p.id = c.product_id
A: You can use LEFT JOIN (https://www.mysqltutorial.org/mysql-left-join.aspx/) to combine tables where one table may not have matching rows. See http://sqlfiddle.com/#!9/db9d5a/4 for an example:
SELECT
p.id,
p.name,
IFNULL(c.clicks, 0)
FROM
products p
LEFT JOIN clicks c ON c.product_id = p.id
ORDER BY c.clicks DESC;
will return the following:
id | name | clicks
4 fourth 5
2 second 3
1 first 1
3 third 0 | unknown | |
d6223 | train | I found a solution. Here is what I created:
Javascript
$('.class').addClass('blink'); // <- Start the blink animation.
$('.class').on('webkitTransitionEnd', function() { // <- When the animation ends.
    $('.class').addClass('paused');   // <- Stop the animation.
    $('.class').addClass('a-finish'); // <- Start the finish animation.
});
Css
.blink {
...some blik animation
}
.paused {
-webkit-animation-play-state:paused;
-moz-animation-play-state:paused;
animation-play-state:paused;
}
.a-finish {
-webkit-animation: 5s linear 0s normal none 1 wrap-done;
}
@-webkit-keyframes wrap-done {
0% { box-shadow: 0 9px 4px rgba(255, 255, 255, 1) inset;}
100% { box-shadow: 0 9px 4px rgba(255, 255, 255, 0) inset;}
}
And this does not work!!
So if an animation is paused by animation-play-state: paused, we can't add a new one. So I just used removeClass to drop the previous animation and started a new one for the finish.
d6224 | train | SweetAlert uses promises to keep track of how the user interacts with the alert.
If the user clicks the confirm button, the promise resolves to true. If the alert is dismissed (by clicking outside of it), the promise resolves to null. (ref)
So, per their guide:
function areYouSureEdit() {
    swal({
        title: "Are you sure you wish to edit this record?",
        type: "warning",
        showCancelButton: true,
        confirmButtonColor: '#DD6B55',
        confirmButtonText: 'Yes!',
        closeOnConfirm: false,
    }).then((value) => {
        if (value) {
            //bring edit page
        } else {
            //write what you want to do
        }
    });
}
function areYouSureDelete() {
    swal({
        title: "Are you sure you wish to delete this record?",
        type: "warning",
        showCancelButton: true,
        confirmButtonColor: '#DD6B55',
        confirmButtonText: 'Yes, delete it!',
        closeOnConfirm: false,
    }).then((value) => {
        if (value) {
            //ajax call or other action to delete the blog
            swal("Deleted!", "Your imaginary file has been deleted!", "success");
        } else {
            //write what you want to do
        }
    });
} | unknown | |
d6225 | train | There is no such thing as "converting the bytes into hexadecimal". The actual data is invariant and consists of binary ones and zeros. Your interpretation of these bits can differ according to your needs. E.g., they can be interpreted as a text character or a decimal or hexadecimal or whatever value.
E.g.:
Binary 01010101 = decimal 85 = hexadecimal 55 = octal 125 = 'U' ASCII character.
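The same point can be demonstrated in Python — one value, several textual representations:

```python
value = 0b01010101  # the bits themselves never change

print(bin(value))  # binary       -> 0b1010101
print(value)       # decimal      -> 85
print(hex(value))  # hexadecimal  -> 0x55
print(oct(value))  # octal        -> 0o125
print(chr(value))  # ASCII        -> U
```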
A: A crude and simple implementation is to split the byte into two nibbles and then use each nibble as an index into a hex character "table".
; cdecl calling convention (google it if you're not familiar with it)
HEX_CHARSET db '0123456789ABCDEF'
; void byteToHex(byte val, char* buffer)
proc byteToHex
push bp
mov bp,sp
push di
mov dx,[word ptr ss:bp + 4] ; the byte value of val
mov di,[word ptr ss:bp + 6] ; the address of buffer
; high nibble first
mov ax,dx
mov cl,4
shr al,cl
push ax
call nibbleToHex
add sp,2
stosb
; low nibble second
mov ax,dx
push ax
call nibbleToHex
add sp,2
stosb
pop di
mov sp,bp
pop bp
ret
endp byteToHex
; char nibbleToHex(byte nibble)
proc nibbleToHex
push bp
mov bp,sp
push si
mov ax,[word ptr ss:bp + 4]
and ax,0Fh ; Sanitizing input param
lea si,[ds:HEX_CHARSET]
add si,ax
lodsb
pop si
mov sp,bp
pop bp
ret
endp nibbleToHex
A: A hexadecimal digit has 4 bits in it. A byte has 8 bits in it or 2 hex digits.
To display a byte in hex, you need to separate each of those two 4-bit halves and then convert the resultant value of each (which, unsurprisingly, will be from 0 to 24-1, IOW, from 0 to 15 or from 0 to 0FH) to the corresponding ASCII code:
0 -> 48 (or 30H or '0')
1 -> 49 (or 31H or '1')
...
9 -> 57 (or 39H or '9')
10 (or 0AH) -> 65 (or 41H or 'A')
11 (or 0BH) -> 66 (or 42H or 'B')
...
15 (or 0FH) -> 70 (or 46H or 'F')
Once you've converted a byte into two ASCII characters you can call the appropriate API (system call) of your OS to display those characters either one by one or as a string (you'll probably need to append a zero byte after those two characters to make a string).
That's all.
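The split-and-map algorithm described above, sketched in Python rather than assembly just to make the logic explicit:

```python
def byte_to_hex(b):
    """Convert one byte (0-255) into two hex ASCII characters."""
    def nibble_to_ascii(n):
        # 0..9 -> '0'..'9' (add 48), 10..15 -> 'A'..'F' (add 55)
        return chr(n + ord('0')) if n <= 9 else chr(n - 10 + ord('A'))
    high = (b >> 4) & 0x0F  # isolate the high nibble
    low = b & 0x0F          # isolate the low nibble
    return nibble_to_ascii(high) + nibble_to_ascii(low)

print(byte_to_hex(0x4F))  # -> 4F
```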
A: The instructions explicitly say you're supposed to write this yourself!
; push ax ; byte in al
; push outbuf
; call Byte2Hexadecimal
; add sp, 4
Byte2Hexadecimal:
push bp
mov bp, sp
push di
mov di, [bp + 4] ; buffer to put it
mov ax, [bp + 6] ; we're only interested in al
mov ah, al ; make a copy
mov cl, 4 ; ASSume literal 8086
shr al, cl ; isolate high nibble first
add al, '0' ; '0'..'9'
cmp al, '9' ; or...
jbe skip
add al, 7 ; 'A'..'F'
skip:
stosb
mov al, ah ; restore our al from copy
and al, 0Fh ; isolate low nibble
add al, '0' ; etc...
cmp al, '9'
jbe skip2
add al, 7
skip2:
stosb
pop di
mov sp, bp
pop bp
ret
Untested(!)... something like that... (probably want to zero-terminate (or '$' terminate?) your buffer).
Insanely short way to convert nibble to hex ascii
cmp al, 0Ah
sbb al, 69h
das
You probably don't want to figure that one out... and das is dog slow anyway...
Now: What assembler? What OS? | unknown | |
d6226 | train | Martijn's advice to use glob.glob is good for general shell wildcards, but in this case it looks as if you want to add all files in a directory to the ZIP archive. If that's right, you might be able to use the -r option to zip:
directory = 'example'
subprocess.call(['zip', '-r', 'example.zip', directory])
A: Because running a command in the shell is not the same thing as running it with subprocess.call(); the shell expanded the example/* wildcard.
Either expand the list of files with os.listdir() or the glob module yourself, or run the command through the shell from Python; with the shell=True argument to subprocess.call() (but make the first argument a whitespace-separated string).
Using glob.glob() is probably the best option here:
import glob
import subprocess
subprocess.call(['zip', 'example.zip'] + glob.glob('example/*'))
A: Try shell=True. subprocess.call('zip example.zip example/*', shell=True) would work. | unknown | |
d6227 | train | please suggest a way to find its location
Try
whereis crontab | unknown | |
d6228 | train | Say all your array data is in a variable $myArray; then
$myArray[1]
will give you your first array | unknown | |
d6229 | train | You can use a text field rather than a text view and set its preferredMaxLayoutWidth property.
By default, if preferredMaxLayoutWidth is 0, a text field will compute its intrinsic size as though its content were laid out in one long line (or, at least, without any maximum width). Even if you apply a constraint that limits its actual width, that doesn't change its intrinsic height and therefore it typically won't be tall enough to contain the text as wrapped.
If you set preferredMaxLayoutWidth, then the text field will compute its intrinsic size based on the text as wrapped to that width. That includes making its intrinsic height tall enough to fit. | unknown | |
d6230 | train | According to your description, you can try to install a new self-hosted agent in your Linux server.
And then in your CI pipeline, you can use the git clone command to clone the repo in your Linux server.
You can also use the Copy Files task to copy the folder of the repo to the UNC path. | unknown | |
d6231 | train | Found the answer myself: first I generated the thumbnail of the video with the video_thumbnail package (https://pub.dev/packages/video_thumbnail), then created a model holding the thumbnail path and the video path, saved both paths, and accessed them :) | unknown | |
d6232 | train | Check out how we solved this by overriding the dispatch methods in Activity. | unknown | |
d6233 | train | It's just that the definition of the displayed plot is a bit better: retina quality. Any display with retina resolution will make the figures look better - if your monitor's resolution is sub-retina than the improvement will be less noticeable. | unknown | |
d6234 | train | you have two options:
*
*declare PatientClinicalTabComponent in PatientModule (and nowhere else) and just use it inside PatientModule
*create a new module called PatientClinicalTabModule, declare PatientClinicalTabComponent inside PatientClinicalTabModule, and then import PatientClinicalTabModule inside PatientModule
this will solve your problem
d6235 | train | You could assign a different writer to System.out (assuming that's where your output goes) and inspect what gets written there. In general, you probably want to make the writer a parameter of printSummary or inject it into the class somehow.
A: So basically you want to do this:
@Test
public void testPrintSummaryForPatient() {
Patient patient_adult=new Patient("Ted",24,1.90,70.0,"Leicester");
surgery_N.printSummaryForPatient("Ted");
}
But can't do any asserts, because the Patient is not returned.
Do you want to return the patient?:
public Patient printSummaryForPatient(String name){
Patient p = findPatient(name);
p.printPatientSummary();
p.computeBMI();
return p;
}
After that you could use your assertions. It seems more like a conceptual problem of how you organize your methods.
You have method calls in printSummaryForPatient that don't seem to do anything: their return values are neither returned nor saved. | unknown | |
d6236 | train | SparseArray is exactly for values which are of an unknown range. So it seems to fit your need.
A: In R.java the resource ids are all integers, so there is no problem using a SparseArray.
A: Do not use SparseArray with resource IDs as keys. SparseArray sorts keys in ascending order for efficient access. Since resource ids are generated automatically, you cannot guarantee their order within the SparseArray. This means that when you iterate over the SparseArray, you will not follow the same order in which you filled it in.
I think you should use LinkedHashMap instead. | unknown | |
d6237 | train | You can customize it.
Simple example
<!DOCTYPE html>
<html>
<style>
/* The container */
.container {
display: block;
position: relative;
padding-left: 35px;
margin-bottom: 12px;
cursor: pointer;
font-size: 22px;
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
/* Hide the browser's default radio button */
.container input {
position: absolute;
opacity: 0;
cursor: pointer;
}
/* Create a custom radio button */
.checkmark {
position: absolute;
top: 0;
left: 0;
height: 25px;
width: 25px;
background-color: #eee;
border-radius: 50%;
}
/* On mouse-over, add a grey background color */
.container:hover input ~ .checkmark {
background-color: #ccc;
}
/* When the radio button is checked, add a blue background */
.container input:checked ~ .checkmark {
background-color: #2196F3;
}
/* Create the indicator (the dot/circle - hidden when not checked) */
.checkmark:after {
content: "";
position: absolute;
display: none;
}
/* Show the indicator (dot/circle) when checked */
.container input:checked ~ .checkmark:after {
display: block;
}
/* Style the indicator (dot/circle) */
.container .checkmark:after {
top: 9px;
left: 9px;
width: 8px;
height: 8px;
border-radius: 50%;
background: white;
}
</style>
<body>
<h1>Custom Radio Buttons</h1>
<label class="container">One
<input type="radio" checked="checked" name="radio">
<span class="checkmark"></span>
</label>
<label class="container">Two
<input type="radio" name="radio">
<span class="checkmark"></span>
</label>
<label class="container">Three
<input type="radio" name="radio">
<span class="checkmark"></span>
</label>
<label class="container">Four
<input type="radio" name="radio">
<span class="checkmark"></span>
</label>
</body>
</html> | unknown | |
d6238 | train | You can use grepl() to create a boolean condition to filter your vector. Here's a reproducible example:
vec <- c("ABC", "DEF", "A_C", "GHI", "JK_")
vec[!(grepl("_", vec))]
#> [1] "ABC" "DEF" "GHI"
Created on 2020-05-15 by the reprex package (v0.3.0) | unknown | |
d6239 | train | For the total:
var sum = 0;
$(".total").each(function (index, value) {
    // "|| 0" guards against empty inputs, where parseFloat returns NaN
    sum += parseFloat($(this).find('input[name="total1"]').val()) || 0;
});
// the grand total is now in sum
console.log(sum);
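The same summing logic can be sketched without jQuery; here `sumValues` is a hypothetical helper standing in for reading each `total1` input's value:

```javascript
// Sum an array of input-value strings; parseFloat on an empty or
// non-numeric string yields NaN, which "|| 0" coerces to 0.
function sumValues(values) {
  return values.reduce(function (acc, v) {
    return acc + (parseFloat(v) || 0);
  }, 0);
}

console.log(sumValues(['1.5', '2.5', ''])); // prints 4
```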
Apply the same approach for the days!
Working Fiddle : Fiddle | unknown | |
d6240 | train | OP's 1st Question:
Does any program compiled with the -g option have its source code available for gdb to list, even if the source code files are unavailable?
No. If there is no path to the sources, then you will not see the source.
OP's 2nd Question:
[...] when you set the breakpoints at a line in a program with a complicated multi source file structure do you need the names of the source code files??
Not always. There are a few ways of setting breakpoints. The only two I remember are breaking on a line or breaking on a function. If you wanted to break on the first line of a function, use
break functionname
If the function lives in a module
break __modulename_MOD_functionname
The modulename and functionname should be lowercase, no matter how you've declared them in the code. Note the two underscores before the module name. If you are not sure, use nm on the executable to find out what the symbol is.
If you have the source code available and you are using a graphical environment, try ddd. It stops me swearing and takes a lot of guesswork out of gdb. If the source is available, it will show up straight away. | unknown | |
d6241 | train | Once i is initialized with i=atoms, the connection between the two variables is gone: the initializer runs exactly once and is never processed again.
"i" will of course keep being decremented (because of the i-- decrement),
but you can change the value of atoms to whatever you like and the results will not change.
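A minimal C sketch of this point: the loop below copies atoms into i once at initialization, so clobbering atoms inside the body changes nothing about how many times the loop runs (count_iterations is an illustrative name, not from the original program):

```c
#include <assert.h>

/* i takes the value of atoms exactly once, at initialization;
   afterwards the two variables are completely independent. */
int count_iterations(int atoms)
{
    int iterations = 0;
    for (int i = atoms; i > 0; i--) {
        atoms = 0;      /* no effect on i or on the loop condition */
        iterations++;
    }
    return iterations;
}
```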
A: i=atoms is the initialization in the for loop. From then on, the value of i is independent of atoms.
A: Yes, you have answered your own question: the variables i and atoms are two separate instances.
When you start the loop you set i equal to the same value as atoms, but they are still separate variables. Therefore, changing the value of one inside the loop does not affect the other. | unknown | 
d6242 | train | If you're using NodeJS, you can use fs to check if a file exists or not.
if (!fs.existsSync(path)) {
    // file doesn't exist
} else {
    // file exists
}
If you're not using NodeJS you can setup a simple localhost server and send a request to that to check if it does exists with fs.
If you're using electron (idk which framework you're using) you can use the Electron ipc to send messages from the main process to the renderer process.
A: You could check if the file exists using the FileReader object from phonegap. You could check the following:
var reader = new FileReader();
var fileSource = <here is your file path>
reader.onloadend = function(evt) {
if(evt.target.result == null) {
// If you receive a null value the file doesn't exists
} else {
// Otherwise the file exists
}
};
// We are going to check if the file exists
reader.readAsDataURL(fileSource);
If that one does work check the comments of this post:
How to check a file's existence in phone directory with phonegap (This is where I got this answer from) | unknown | 
d6243 | train | You just need to sprinkle some more async over it.
As written, the iterable_content generator blocks the reactor until it finishes generating content. This is why you see no results until it is done. The reactor does not get control of execution back until it finishes.
That's only because you used time.sleep to insert a delay into it. time.sleep blocks. This -- and everything else in the "asynchronous" application -- is really synchronous and keeps control of execution until it is done.
If you replace iterable_content with something that's really asynchronous, like an asynchronous generator:
async def iterable_content():
for _ in range(5):
await asyncio.sleep(1)
yield b"a" * CHUNK_SIZE
and then iterate over it asynchronously with async for:
async def application(send):
async for part in iterable_content():
await send(
{
"body": part,
"more_body": True,
}
)
await send({"more_body": False})
then the reactor has a chance to run in between iterations and the server begins to produce output chunk by chunk. | unknown | |
d6244 | train | Of course, we can use rewritemap to check and replace the value. You could modify the rule below to achieve your requirement.
<rewriteMaps>
<rewriteMap name="StaticMap">
<add key="aaaaaaaaa" value="bbbbbbbb" />
</rewriteMap>
</rewriteMaps>
<outboundRules>
<rule name="rewritemaprule">
<match serverVariable="HTTP_IV-USER" pattern="(.*)" />
<conditions>
<add input="{StaticMap:{HTTP_IV-USER}}" pattern="(.+)" />
</conditions>
<action type="Rewrite" value="{C:1}" />
</rule>
</outboundRules> | unknown | |
d6245 | train | Simply use handlers.
Handler has a method called sendMessageDelayed(Message msg, long delayMillis).
Just schedule your messages at 2-second intervals (note that the delay is in milliseconds).
Here is some sample code:
int i = 1;
while (i < 5) {
    Message msg = Message.obtain();
    msg.what = 0;
    hm.sendMessageDelayed(msg, i * 2000); // 2000 ms = 2 seconds
    i++;
}
Now this code will invoke the handler's handleMessage method every 2 seconds.
Here is your Handler:
Handler hm = new Handler(){
public void handleMessage(Message msg)
{
//Toast code.
}
};
and you are done.
Thanks.
A: Handlers are definitely the way to go but I would just postDelayed instead of handling an empty message.
Also extending Toast and creating a method for showing it longer is nice.
Sample Code:
// make sure to declare a handler in the class
private final Handler mHandler = new Handler();
// The method to show longer
/**
* Show the Toast Longer by repeating it.
* Depending upon LENGTH_LONG (3.5 seconds) or LENGTH_SHORT (2 seconds)
* - The number of times to repeat will extend the length by a factor
*
* @param number of times to repeat
*/
public void showLonger(int repeat) {
// initial show
super.show();
// to keep the toast from fading in/out between each show we need to check for what Toast duration is set
int duration = this.getDuration();
if (duration == Toast.LENGTH_SHORT) {
duration = 1000;
} else if (duration == Toast.LENGTH_LONG) {
duration = 2000;
}
for (int i = 1; i <= repeat; i++) {
// show again
            mHandler.postDelayed(new Runnable() {
@Override
public void run() {
show();
}
}, i * duration);
}
} | unknown | |
d6246 | train | You can temporarily disable CORS checking with a browser extension:
Chrome:
Allow-Control-Allow-Origin: *
For Opera you should install:
1) An extension that allows you to install extensions from the Chrome Web Store
2) Allow-Control-Allow-Origin: * | unknown | 
d6247 | train | Thank you for the help! I used:
$order = Mage::getModel('sales/order')->load($entityId); // $entityId is the order's entity_id
$paymentInfo = Mage::helper('payment')->getInfoBlock($order->getPayment())
->setIsSecureMode(true);
$channelOrderId = $paymentInfo->getChannelOrderId();
A: You should create models for the tables (if there aren't any available already)
How to create a model
Then you can simply get an instance of the model
$ebay = Mage::getModel('m2epro/ebay')->load($row_id);
echo $ebay->getData('ebay_order_id'); | unknown | |
d6248 | train | I did it as shown below. It works fine. Hurray :D
<tr ng-repeat="item in My.Items">
<td data-title="'MyColumn'" sortable="'Value'">
<span ng-if="(item.Value | uppercase) == 'NO'">{{item.Value}}</span>
<span ng-if="(item.Value | uppercase) == 'YES'">{{item.Value}}</span>
</td>
</tr> | unknown | |
d6249 | train | You are almost there!
Try the code below in sidenav-autosize-example.html:
<mat-icon mat-list-icon style="font-size: 150px; height: 150px;color: rgba(244, 92, 27, 0.356);margin: 0 auto;">account_circle</mat-icon>
<span style="position:relative;top:75px;right:20px">Current Username</span>
<a mat-list-item href="#">Link 2</a>
<a mat-list-item href="#">Link 3</a>
</mat-nav-list>
Live Demo
Hope it will solve your problem | unknown | |
d6250 | train | SQL is used to apply the current SQL Dialect for that file (in case you do not know: you can configure the IDE to use different dialects on a per-file/folder basis).
To have two dialects in the same file:
*
*Do not use SQL as an identifier if you will be changing it across the project (as it will use current SQL Dialect for that file).
I mean: you can use it, not an issue; but do not get lost/confused if you change the dialect later for that folder/file or for the whole project.
*Create and use more specific identifiers instead that will instruct the IDE to use a specific dialect there.
It's easy: just clone a Language Injection rule for the bundled SQL identifier and adjust it a bit:
*
*Settings (Preferences on macOS) | Editor | Language Injections
*Clone existing rule for <<<SQL and adjust as needed (or create new one from scratch using the right type)
As you may see from this example, string for PostgreSQL complains on syntax (there is no DB attached to the project, hence unknown table table):
In-place language injection via @lang also works. NOTE that the injection must be placed just before the string content, not before the variable etc.
$sql1 = /** @lang MySQL */'SELECT * FROM `table`';
$sql2 = /** @lang PostgreSQL */'SELECT * FROM `table`'; | unknown | |
d6251 | train | You can do this by creating a directive which detects changes and places the decimal separator in the right spot.
I'll try to take some time to make an example if you need it.
EDIT:
Sorry for the late answer. I spent a lot of time on this and couldn't get it to work as well as expected; I encountered issues with the change/keydown events (they only fire once on inputNumbers...).
Here's the best I've got; it's a base to work from. Sorry, I have no more time to keep searching on it.
https://stackblitz.com/edit/primeng-inputnumber-demo-reji8s?file=src%2Fapp%2Fapp.component.html
If you find a nice solution, please tell me; I'm interested to know :) | unknown | 
d6252 | train | Below is for BigQuery Standard SQL
#standardSQL
SELECT DUID, AVG(TOTEXP15) AS famAverage
FROM `OmniHealth.new2015Data`
GROUP BY DUID
HAVING MIN(BMINDX53) >=0 AND MAX(BMINDX53) <=25
AND MIN(ADSMOK42) = -1 AND MAX(ADSMOK42) = -1
AND MIN(FCSZ1231) = 7 AND MAX(FCSZ1231) = 7
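The MIN/MAX-in-HAVING trick (every row in a group satisfies a condition exactly when the group's min and max both do) is not BigQuery-specific. A sketch of the same pattern against SQLite with made-up toy data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE new2015 (DUID, TOTEXP15, BMINDX53, ADSMOK42, FCSZ1231)")
con.executemany(
    "INSERT INTO new2015 VALUES (?, ?, ?, ?, ?)",
    [
        (1, 100, 20, -1, 7),   # family 1: every member passes every filter
        (1, 200, 24, -1, 7),
        (2, 500, 30, -1, 7),   # family 2: one member's BMI is out of range
    ],
)

rows = con.execute("""
    SELECT DUID, AVG(TOTEXP15) AS famAverage
    FROM new2015
    GROUP BY DUID
    HAVING MIN(BMINDX53) >= 0 AND MAX(BMINDX53) <= 25
       AND MIN(ADSMOK42) = -1 AND MAX(ADSMOK42) = -1
       AND MIN(FCSZ1231) = 7 AND MAX(FCSZ1231) = 7
""").fetchall()
print(rows)  # only family 1 survives: [(1, 150.0)]
```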
A: Consider joining two aggregate query derived tables that matches on count to align all household members to all household members with specific conditions.
SELECT AVG(t1.famTotal) as famTotal
FROM
(SELECT DUID, Count(*) As GrpCount, SUM(TOTEXP15) as famTotal
FROM `OmniHealth.new2015Data`
GROUP BY DUID) As t1
INNER JOIN
(SELECT DUID, Count(*) As GrpCount
FROM `OmniHealth.new2015Data`
WHERE BMINDX53 BETWEEN 0 AND 25
AND ADSMOK42 = -1
AND FCSZ1231 = 7
GROUP BY DUID) As t2
ON t1.DUID = t2.DUID AND t1.GrpCount = t2.GrpCount | unknown | |
d6253 | train | I agree that it is probably the blur event on the input that causes the keyboard to go away.
You could solve this with a directive on the button that refocuses on the input following a click (although I have no way to verify whether this would cause a flicker with a keyboard).
Here's an illustrative example where you pass the ID of the element that you want to refocus to:
app.directive("refocus", function() {
return {
restrict: "A",
link: function(scope, element, attrs) {
element.on("click", function() {
var id = attrs.refocus;
var el = document.getElementById(id);
el.focus();
});
}
}
});
and the usage is:
<input id="foo" ng-model="newItem">
<button ng-click="doSomething(newItem)" refocus="foo">add</button>
plunker | unknown | |
d6254 | train | There's plenty of opportunity to configure your Legend and Series, but when you call DataBindCrossTable, you're delegating everything to this method. The only thing you're left with is to overwrite whatever you want after the fact.
So, right after you call DataBindCrossTable, you can for instance, simply do:
foreach (Series s in chrtValuesByWeekByYear.Series)
s.Name = s.Name.Remove(0, 7); | unknown | |
d6255 | train | I'm making two assumptions:
*
*Site B, week 4 = 2 species, both "dog" and "rabbit"; and
*All sites share the same weeks, so if at least one site has week 4, then all sites should include it. This only drives the mt (empty) variable; feel free to update this variable.
I first suggest an "empty" data.frame to ensure sites have the requisite week numbers populated:
mt <- expand.grid(field_site = unique(ret$field_site),
week = unique(ret$week))
The use of tidyr helps:
library(tidyr)
df %>%
mutate(fake = TRUE) %>%
# ensure all species are "represented" on each row
spread(animal, fake) %>%
# ensure all weeks are shown, even if no species
full_join(mt, by = c("field_site", "week")) %>%
# ensure the presence of a species persists at a site
arrange(week) %>%
group_by(field_site) %>%
mutate_if(is.logical, funs(cummax(!is.na(.)))) %>%
ungroup() %>%
# helps to contain variable number of species columns in one place
nest(-field_site, -week, .key = "species") %>%
group_by(field_site, week) %>%
# could also use purrr::map in place of sapply
mutate(n = sapply(species, sum)) %>%
ungroup() %>%
select(-species) %>%
arrange(field_site, week)
# # A tibble: 12 × 3
# field_site week n
# <fctr> <fctr> <int>
# 1 A 1 1
# 2 A 2 2
# 3 A 3 3
# 4 A 4 3
# 5 B 1 0
# 6 B 2 1
# 7 B 3 1
# 8 B 4 2
# 9 C 1 1
# 10 C 2 1
# 11 C 3 2
# 12 C 4 3 | unknown | |
d6256 | train | A couple of observations:
*
*The final boundary is not correct. Assuming you’ve created a boundary that starts with --, you should be appending \(boundary)-- as the final boundary. Right now the code is creating a new UUID (and omitting all of those extra dashes you added in the original boundary), so it won’t match the rest of the boundaries. You need a newLine sequence after that final boundary, too.
The absence of this final boundary could be preventing it from recognizing this part of the body, and thus the “File part might be missing” message.
*The boundary should not be a local variable. When preparing multipart requests, you have to specify the boundary in the header (and it has to be the same boundary here, not another UUID() instance).
request.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")
Generally, I would have the caller create the boundary, use that when creating the request header, and then pass the boundary as a parameter to this method. See Upload image with parameters in Swift.
The absence of the same boundary value in the header and the body would prevent it from recognizing any of these parts of the body.
*You have defined your local boundary to include the newLine. Obviously, it shouldn't be a local var at all, but it also must not include the newline at the end, otherwise the attempt to append the final boundary of \(boundary)-- will fail.
Obviously, if you take this out of the boundary, make sure to insert the appropriate newlines as you build the body, where needed, though. Bottom line, make sure your body looks like the following (with the final --):
----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36
Content-Disposition: form-data; name="table_name"
incident
----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36
Content-Disposition: form-data; name="table_sys_id"
ba931ddadbf93b00f7bbdd0b5e96193c
----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36
Content-Disposition: form-data; name="file"; filename="[email protected]"
Content-Type: image/png
.....
----------------------------F2152BF1-CE54-4E86-B8D0-931FA36F7C36--
*In their curl example for /now/attachment/upload, they are using a field name of uploadFile, but you are using file. You may want to double check your field name and match the curl and postman examples.
curl "https://instance.service-now.com/api/now/attachment/upload" \
--request POST \
--header "Accept:application/json" \
--user "'admin':'admin'" \
--header "Content-Type:multipart/form-data" \
-F 'table_name=incident' \
-F 'table_sys_id=d71f7935c0a8016700802b64c67c11c6' \
-F '[email protected]'
If, after fixing the above, it still doesn’t work, I’d suggest you use Charles or Wireshark and compare a successful request vs the one you’re generating programmatically.
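As a reference for comparing captures, here is a language-neutral sketch (written in Python, with illustrative field values) of how a well-formed body lines up. The key detail is that the closing boundary reuses the same value as every part boundary, plus a trailing "--":

```python
def multipart_body(boundary, fields, file_name, file_bytes):
    crlf = b"\r\n"
    b = boundary.encode()
    out = b""
    for name, value in fields.items():
        out += b"--" + b + crlf
        out += b'Content-Disposition: form-data; name="' + name.encode() + b'"' + crlf + crlf
        out += value.encode() + crlf
    out += b"--" + b + crlf
    out += (b'Content-Disposition: form-data; name="file"; filename="'
            + file_name.encode() + b'"' + crlf)
    out += b"Content-Type: image/png" + crlf + crlf
    out += file_bytes + crlf
    # final boundary: the SAME value as all the others, plus a trailing "--"
    out += b"--" + b + b"--" + crlf
    return out

body = multipart_body("XYZ", {"table_name": "incident"}, "a.png", b"\x89PNG")
```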
Needless to say, you might want to consider using Alamofire, which gets you out of the weeds of creating well-formed multipart requests. | unknown | |
d6257 | train | When you set counter = 1 you're declaring a new temporary counter equal to 1. The compiler does the work of determining the type. This temporary object is deduced to type int by default, and lives while the lambda is alive.
By setting mutable you can both modify counter and this
Aside: since it appears that you're inserting into a map/unordered map, you're probably better off with the following:
#include <algorithm> // For transform
#include <iterator>  // For inserter
#include <utility>   // For make_pair
Constructor(const vector<string>& names) {
    // a braced-init-list cannot be deduced as the lambda's return type,
    // so build the pair explicitly
    auto example = [counter = 1](const string& item) mutable {
        return std::make_pair(item, counter++);
    };
std::transform(names.begin(), names.end(),
std::inserter(nameMapping, nameMapping.end()), example);
}
By moving the nameMapping call outside of the lambda, you don't have to confuse yourself with what is in scope and what is not.
Also, you can avoid unnecessary captures, and anything else that might confuse yourself or other readers in the future.
A:
But yet, I am able to create a local-variable in class-scope and modify it in a mutable lambda fn?
Can someone please help me understand whats going on.
It's exactly as you said.
Possibly confusing because there's no type given in this particular kind of declaration. Personally I think that was an awful design decision, but there we go.
Imagine it says auto counter = 1 instead; the auto is done for you. The variable then becomes a "member" of the lambda object, giving it state.
The code's not great, because the lambda isn't guaranteed to be applied to the container elements in order. A simple for loop would arguably be much simpler, clearer and predictable:
Constructor(const vector<string>& names)
{
int counter = 1;
for (const string& name : names)
nameMapping.emplace(name, counter++);
}
There's really no reason to complicate matters just for the sake of using "fancy" standard algorithms. | unknown | |
d6258 | train | I found http://owlgraphic.com/. It fits some of the features CodeTabs B+ has. | unknown | |
d6259 | train | To answer the multicolumn comobobox part of the question:
Use an array for AddItem (put it in a loop if you want)
Dim Arr(0 To 1) As String
Arr(0) = "Col 1"
Arr(1) = "Col 2"
cmb.AddItem Arr
and to retrieve data for the selected item:
cmb.List(cmb.ListIndex, 1)
you can also set up an enumeration for your column numbers like this:
Enum ColList
Loc=0
Weight=1
End Enum
then to retrieve data it would look like this (much more readable code)
cmb.List(cmb.ListIndex, ColList.Weight)
also, you dont have to use the word Fields... you can address your recordset like this:
rsDB1!Weight
A: split string into an array (zero based)
debug.print split("Loc: abc Weight : 1234"," ")(4) ' the second argument is the separator character
debug.print split("Loc: abc Weight : 1234")(4) ' space is the default separator
both print 1234 | unknown | |
d6260 | train | Udev monitors hardware and forwards events to dbus. You just need some dbus listener. A quick check using the dbus-monitor tool shows this in my system:
dbus-monitor --system
signal sender=:1.15 -> dest=(null destination) serial=144 path=/org/freedesktop/UDisks; interface=org.freedesktop.UDisks; member=DeviceChanged
object path "/org/freedesktop/UDisks/devices/sr0"
This is the DeviceChanged event from Udisks, and the device path is included.
So, in whatever programming language you want that supports dbus bindings you can listen for those (system bus) events.
A: Traditionally there has been HAL (Hardware Abstraction Layer) for this, but the web page says
HAL is in maintenance mode - no new
features are added. All future
development focuses on udisks, UPower
and other parts of the stack. See
Software/DeviceKit for more
information.
and the DeviceKit page lists
udisks, a D-Bus interface for dealing with storage devices
So udisks should probably be what you are asking for.
A: The best way I was able to find was Halevt. Halevt is apparently a higher level abstraction than using HAL directly. It uses an XML based configuration file that may or may not be to your liking. The configuration file properties documentation is somewhat lacking. A list of all the supported properties are listed here:
http://www.marcuscom.com/hal-spec/hal-spec.html
Also, the link to Halevt: http://www.nongnu.org/halevt/ | unknown | |
d6261 | train | Let's try this step by step
*
*Cast column timestamp to TimestampType format.
*Create a column of collect_list of mcc (say mcc_list) in the last 24 hours using window with range between interval 24 hours and current row frame.
*Create a column of set/unique collection of mc_list (say mcc_set) using array_distinct function. This column could also be created using collect_set over the same window in step 2.
*For each value of mcc_set, get its count in the mcc_list. Duplicated mcc value will have a count of > 1 so we can filter it. After that, the array will only contain the duplicated mcc, use size to count how many mcc are duplicated in the last 24 hours.
These steps put into a code could be like this
import pyspark.sql.functions as F
from pyspark.sql.types import *
df = (df
.withColumn('ts', F.col('timestamp').cast(TimestampType()))
.withColumn('mcc_list', F.expr("collect_list(mcc) over (order by ts range between interval 24 hours preceding and current row)"))
.withColumn('mcc_set', F.array_distinct('mcc_list'))
.withColumn('dups', F.expr("size(filter(transform(mcc_set, a -> size(filter(mcc_list, b -> b = a))), c -> c > 1))"))
# .drop(*['ts', 'mcc_list', 'mcc_set']))
)
df.show(truncate=False)
# +----+----------------------------+-------------------+------------------------------------+------------------------+----+
# |mcc |timestamp |ts |mcc_list |mcc_set |dups|
# +----+----------------------------+-------------------+------------------------------------+------------------------+----+
# |5812|2020-12-27T17:28:32.000+0000|2020-12-27 17:28:32|[5812] |[5812] |0 |
# |5812|2020-12-25T17:35:32.000+0000|2020-12-25 17:35:32|[5999, 7999, 5814, 5814, 5812, 5812]|[5999, 7999, 5814, 5812]|2 |
# |5812|2020-12-25T13:04:05.000+0000|2020-12-25 13:04:05|[5999, 7999, 5814, 5814, 5812] |[5999, 7999, 5814, 5812]|1 |
# |5814|2020-12-25T12:23:05.000+0000|2020-12-25 12:23:05|[5999, 7999, 5814, 5814] |[5999, 7999, 5814] |1 |
# |5814|2020-12-25T11:52:57.000+0000|2020-12-25 11:52:57|[5999, 7999, 5814] |[5999, 7999, 5814] |0 |
# |7999|2020-12-25T09:23:01.000+0000|2020-12-25 09:23:01|[5814, 5999, 7999] |[5814, 5999, 7999] |0 |
# |5999|2020-12-25T07:29:52.000+0000|2020-12-25 07:29:52|[5999, 5814, 5999] |[5999, 5814] |1 |
# |5814|2020-12-24T11:00:57.000+0000|2020-12-24 11:00:57|[5999, 5814] |[5999, 5814] |0 |
# |5999|2020-12-24T07:29:52.000+0000|2020-12-24 07:29:52|[5999] |[5999] |0 |
# +----+----------------------------+-------------------+------------------------------------+------------------------+----+
You can drop unwanted columns afterwards. | unknown | |
d6262 | train | You can do something like this-
from functools import reduce
nested_list = [[], ['a','b',5],['c', 'd', 2], []]
merged_list = reduce((lambda x, y:x+y), nested_list)
This solution applies to nested lists that are only one level deep, e.g. [[a, b, c], [x, y, z]].
If you can tell me what shape of list you want merged, I can provide a solution for that. For now, I have assumed it's just a one-level nested list.
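For one-level nesting, itertools.chain.from_iterable does the same flattening and avoids the quadratic cost of repeatedly concatenating lists with +:

```python
from itertools import chain

nested_list = [[], ['a', 'b', 5], ['c', 'd', 2], []]
merged_list = list(chain.from_iterable(nested_list))
print(merged_list)  # ['a', 'b', 5, 'c', 'd', 2]
```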
A: Assuming the desired output will be looked like this:
col1 col2 col3
NaN ['a' 'b' 5] NaN
NaN ['c' 'd' 2] NaN
And currently you are having the following list at your hands:
>>>a_list
[[], [['a' 'b' 5], ['c' 'd' 2]], []]
Then you can do the following to create the DataFrame:
>>>import pandas as pd
>>>import numpy as np
>>>df = pd.DataFrame(columns=['col1','col2','col3'])
>>>a_list = [[], [['a' 'b' 5], ['c' 'd' 2]], []]
>>>for i in range(len(df.columns.tolist())):
... try:
... df[df.columns[i]] = a_list[i]
... except:
... df[df.columns[i]] = np.nan
>>>df
col1 col2 col3
0 NaN [a, b, 5] NaN
1 NaN [c, d, 2] NaN | unknown | |
d6263 | train | After some testing, I found the problem. It turns out I forgot about a function I made that was called every time I saved a media file. The function returned the duration of the file and used the NAudio.Wave.WaveFileReader and NAudio.Wave.Mp3FileReader classes, which I forgot to close after calling them.
I fixed these issues by putting those readers inside using statements.
Here is the working function:
public static int GetMediaFileDuration(string filePath)
{
filePath = HostingEnvironment.MapPath("~") + filePath;
if (Path.GetExtension(filePath) == ".wav")
using (WaveFileReader reader = new WaveFileReader(filePath))
return Convert.ToInt32(reader.TotalTime.TotalSeconds);
else if(Path.GetExtension(filePath) == ".mp3")
using (Mp3FileReader reader = new Mp3FileReader(filePath))
return Convert.ToInt32(reader.TotalTime.TotalSeconds);
return 0;
}
The moral of the story is, to check if you are opening the file anywhere else in your project
A: I think the problem is not about the streamReader here.
When you run the program, it runs from a specific folder, and that folder is locked by your program. It is only unlocked when you close the program.
To fix the issue, I would suggest writing/deleting/updating in a different folder.
Another solution could be to check the file's readOnly attribute and change it, as explained here.
The last option could be to use different users: if you create a file as a non-admin user, you can still delete it as an Admin user. However, I would definitely not go with this solution, because it is too tricky to manage different users if you are not an advanced Windows user. | unknown | 
d6264 | train | This is easy, but you'll have to execute raw statements, because database creation is not available through the connection methods:
DB::statement(DB::raw('CREATE DATABASE <name>'));
To do that you can use a secondary connection:
<?php
return array(
'default' => 'mysql',
'connections' => array(
'mysql' => array(
'driver' => 'mysql',
'host' => 'host1',
'database' => 'database1',
'username' => 'user1',
'password' => 'pass1'
),
'store' => array(
'driver' => 'mysql',
'host' => 'host2',
'database' => 'database2',
'username' => 'user2',
'password' => 'pass2'
),
),
);
Then you can, during application bootstrap, change the database of the secondary connection:
DB::connection('store')->setDatabaseName($store);
or
Config::set('database.connections.store.database', $store);
And use the secondary connection in your queries:
$user = User::on('store')->find(1);
or
DB::connection('store')->select(...); | unknown | |
d6265 | train | You can't really move a row any higher than the row above it, so I think your best bet would be to remove margin/padding from the <td>s inside that <tr>. Example:
tr.small-item-block td {
margin-top: 0;
padding-top: 0;
}
A: You can't move a tr, but you can set the td's to position: relative, and then set a negative top property, like:
tr.small-item-block td {
position: relative;
top: -10px;
}
Here's your fiddle updated: http://jsfiddle.net/L4gLM/2/ | unknown | |
d6266 | train | You use an internal subprogram; see below. Note that internal subprograms themselves cannot contain further internal subprograms.
ian@eris:~/work/stack$ cat contained.f90
Module func
Implicit None
Contains
Real Function f(x,y)
! Interface explicit so don't need to declare g
Real x,y
f=x*g(y)
Contains
Real Function g(r)
Real r
g=r
End Function g
End Function f
End Module func
Program testit
Use func
Implicit None
Write(*,*) f(1.0,1.0)
End Program testit
ian@eris:~/work/stack$ gfortran-8 -std=f2008 -Wall -Wextra -fcheck=all -O -g contained.f90
ian@eris:~/work/stack$ ./a.out
1.00000000
ian@eris:~/work/stack$ | unknown | |
d6267 | train | Solved it!
This is the working code:
function getHeuristic(currentXY, targetXY: array of word): word;
begin
getHeuristic:=abs(currentXY[0]-targetXY[0])+abs(currentXY[1]-targetXY[1]);
end;
function getPath(startingNodeXY, targetNodeXY: array of word; grid: wordArray3; out pathToControlledCharPtr: word; worldObjIndex: word): wordArray2;
var
openList, closedList: array of array of word; { x/y/g/h/parent x/parent y, total }
qXYGH: array[0..5] of word; { x/y/g/h/parent x/parent y }
gridXCnt, gridYCnt: longInt;
maxF, q, openListCnt, closedListCnt, parentClosedListCnt, getPathCnt, adjSquNewGScore: word;
openListIndexCnt, closedListIndexCnt, qIndexCnt, successorIndexCnt: byte;
getMaxF, successorOnClosedList, successorOnOpenList, pathFound: boolean;
begin
{ Add the starting square (or node) to the open list. }
setLength(openList, 6, length(openList)+1);
openList[0, 0]:=startingNodeXY[0];
openList[1, 0]:=startingNodeXY[1];
setLength(closedList, 6, 0);
{ Repeat the following: }
{ D) Stop when you: }
{ Fail to find the target square, and the open list is empty. In this case, there is no path. }
pathFound:=false;
{ writeLn('h1'); }
while length(openList[0])>0 do
begin
{ A) Look for the lowest F cost square on the open list. We refer to this as the current square. }
maxF:=0;
q:=0;
getMaxF:=true;
for openListCnt:=0 to length(openList[0])-1 do
begin
//writeLn(formatVal('open list xy {} {}, cnt {}, list max index {}', [openList[0, openListCnt], openList[1, openListCnt], openListCnt, length(openList[0])-1]));
{ readLnPromptX; }
if (getMaxF=true) or (maxF>openList[2, openListCnt]+openList[3, openListCnt]) then
begin
getMaxF:=false;
maxF:=openList[2, openListCnt]+openList[3, openListCnt];
q:=openListCnt;
end;
end;
for qIndexCnt:=0 to length(qXYGH)-1 do
qXYGH[qIndexCnt]:=openList[qIndexCnt, q];
{ B). Switch it to the closed list. }
setLength(closedList, length(closedList), length(closedList[0])+1);
for closedListIndexCnt:=0 to length(closedList)-1 do
closedList[closedListIndexCnt, length(closedList[0])-1]:=qXYGH[closedListIndexCnt];
{ Remove current square from open list }
if q<length(openList[0])-1 then
begin
for openListCnt:=q to length(openList[0])-2 do
begin
for openListIndexCnt:=0 to length(openList)-1 do
openList[openListIndexCnt, openListCnt]:=openList[openListIndexCnt, openListCnt+1];
end;
end;
setLength(openList, length(openList), length(openList[0])-1);
//writeLn(formatVal('q[x] {}, q[y] {}, startingNodeXY x {}, startingNodeXY y {}, targetNodeXY x {}, targetNodeXY y {}', [qXYGH[0], qXYGH[1], startingNodeXY[0], startingNodeXY[1], targetNodeXY[0], targetNodeXY[1]]));
{ readLnPromptX; }
{ D) Stop when you: }
{ Add the target square to the closed list, in which case the path has been found, or }
if (qXYGH[0]=targetNodeXY[0]) and (qXYGH[1]=targetNodeXY[1]) then
begin
pathFound:=true;
break;
end;
{ C) For each of the 8 squares adjacent to this current square … }
for gridXCnt:=qXYGH[0]-1 to qXYGH[0]+1 do
begin
for gridYCnt:=qXYGH[1]-1 to qXYGH[1]+1 do
begin
{ Adjacent square cannot be the current square }
if (gridXCnt<>qXYGH[0]) or (gridYCnt<>qXYGH[1]) then
begin
//writeLn(formatVal('gridXCnt {} gridYCnt {} qXYGH[0] {} qXYGH[1] {}', [gridXCnt, gridYCnt, qXYGH[0], qXYGH[1]]));
{ readLnPromptX; }
{ Check if successor is on closed list }
successorOnClosedList:=false;
if length(closedList[0])>0 then
begin
for closedListCnt:=0 to length(closedList[0])-1 do
begin
if (closedList[0, closedListCnt]=gridXCnt) and (closedList[1, closedListCnt]=gridYCnt) then
begin
successorOnClosedList:=true;
break;
end;
end;
end;
{ If it is not walkable or if it is on the closed list, ignore it. Otherwise do the following. }
if (gridXCnt>=0) and (gridXCnt<=length(grid[3])-1) and (gridYCnt>=0) and (gridYCnt<=length(grid[3, 0])-1) and (grid[3, gridXCnt, gridYCnt]=0) and (successorOnClosedList=false) then
begin
{ If it isn’t on the open list, add it to the open list. Make the current square the parent of this square. Record the F, G, and H costs of the square. }
successorOnOpenList:=false;
if length(openList[0])>0 then
begin
for openListCnt:=0 to length(openList[0])-1 do
begin
if (openList[0, openListCnt]=gridXCnt) and (openList[1, openListCnt]=gridYCnt) then
begin
successorOnOpenList:=true;
break;
end;
end;
end;
if successorOnOpenList=false then
begin
setLength(openList, length(openList), length(openList[0])+1);
openList[0, length(openList[0])-1]:=gridXCnt;
openList[1, length(openList[0])-1]:=gridYCnt;
openList[4, length(openList[0])-1]:=qXYGH[0];
openList[5, length(openList[0])-1]:=qXYGH[1];
if (openList[0, length(openList[0])-1]=qXYGH[0]) or (openList[1, length(openList[0])-1]=qXYGH[1]) then
begin
openList[2, length(openList[0])-1]:=qXYGH[2]+10; { G = current square's G + orthogonal cost }
end
else
begin
openList[2, length(openList[0])-1]:=qXYGH[2]+14; { G = current square's G + diagonal cost }
end;
openList[3, length(openList[0])-1]:=getHeuristic([openList[0, length(openList[0])-1], openList[1, length(openList[0])-1]], [targetNodeXY[0], targetNodeXY[1]]);
end
else
begin
{ If it is on the open list already, check to see if this path to that square is better, using G cost as the measure (check to see if the G score for the adjacent square is lower if we use the current square to get there (adjacent square
new G score = current square G score + 10 (if adjacent square is vertical or horizontal to current square) or +14 (if it is diagonal); if result is lower than adjacent square current G score then this path is better). A lower G cost means that
this is a better path. If so, change the parent of the square to the current square, and recalculate the G and F scores of the square. If you are keeping your open list sorted by F score, you may need to resort the list to account for the
change. }
adjSquNewGScore:=qXYGH[2]; { start from the current square's G score, not the adjacent square's }
if (openList[0, openListCnt]=qXYGH[0]) or (openList[1, openListCnt]=qXYGH[1]) then
begin
adjSquNewGScore:=adjSquNewGScore+10;
end
else
begin
adjSquNewGScore:=adjSquNewGScore+14;
end;
if adjSquNewGScore<openList[2, openListCnt] then
begin
openList[4, openListCnt]:=qXYGH[0];
openList[5, openListCnt]:=qXYGH[1];
openList[2, openListCnt]:=adjSquNewGScore;
end;
end;
end;
end;
end;
end;
end;
{ writeLn('h2'); }
{ writeLn(pathFound); }
{ readLnHalt; }
if pathFound=true then
begin
{ Save the path. Working backwards from the target square, go from each square to its parent square until you reach the starting square. That is your path. }
closedListCnt:=length(closedList[0])-1;
setLength(getPath, 2, 0);
{ While starting node has not been added to path }
while (length(getPath[0])=0) or (getPath[0, length(getPath[0])-1]<>startingNodeXY[0]) or (getPath[1, length(getPath[0])-1]<>startingNodeXY[1]) do
begin
{ Add node from closed list to path }
setLength(getPath, 2, length(getPath[0])+1);
getPath[0, length(getPath[0])-1]:=closedList[0, closedListCnt];
getPath[1, length(getPath[0])-1]:=closedList[1, closedListCnt];
//writeLn(formatVal('path found {} {}, start {} {}, target {} {}', [getPath[0, length(getPath[0])-1], getPath[1, length(getPath[0])-1], startingNodeXY[0], startingNodeXY[1], targetNodeXY[0], targetNodeXY[1]]));
{ readLnPromptX; }
{ Find next node on closed list with coord matching parent coord of current closed list node }
for parentClosedListCnt:=length(closedList[0])-1 downto 0 do
if (closedList[0, parentClosedListCnt]=closedList[4, closedListCnt]) and (closedList[1, parentClosedListCnt]=closedList[5, closedListCnt]) then break;
closedListCnt:=parentClosedListCnt;
{ if (closedList[0, closedListCnt]=0) and (closedList[1, closedListCnt]=0) then break; }
end;
pathToControlledCharPtr:=length(getPath[0])-1;
end;
end; | unknown | |
d6268 | train | Change the command type to Procedure | unknown | |
d6269 | train | Ok, I did finally get this to work.
First, these two resources are amazing for anyone wanting to delve into this mess:
http://madduck.net/docs/extending-xkb/
&
http://www.charvolant.org/~doug/xkb/html/index.html
For anyone specifically trying to do this switchover, this is what I did:
1) create a file in /usr/share/X11/xkb/symbols for your new mapping
2) put this in it:
// Control is SWAPPED with Win-keys
partial modifier_keys
xkb_symbols "cmd_n_ctrl" {
key <LWIN> { [ Control_L ] };
key <RWIN> { [ Control_R ] };
key <LCTL> { [ Super_L ] };
modifier_map Control { <LWIN>, <RWIN> };
modifier_map Mod4 { <LCTL> };
};
3) edit evdev in /usr/share/X11/xkb/rules to include:
altwin2:cmd_n_ctrl = +altwin2(cmd_n_ctrl)
(under the option = symbols section)
4) add your new option to evdev.lst (same dir):
altwin2:cmd_n_ctrl
(under the option section)
5) now edit your 01-Keyboard conf file to include the new option that you've created. Mine looks like this:
Section "InputClass"
Identifier "keyboard-layout"
Driver "evdev"
MatchIsKeyboard "yes"
Option "XkbLayout" "us, ru, ca, fr"
Option "XkbOptions" "altwin2:cmd_n_ctrl"
EndSection
6) reboot and you should be good to go.
The above resources are way better at explaining all of this, or any snags you might run into. There is probably a much better way to do this (probably not altering the contents of /usr/share), but so far, this is what got me up and running.
Hope that helps someone else stuck in this spot! | unknown | |
d6270 | train | There might be more memory available than what the CPU is currently able to address. The same limit exists for a userland process, which is able to address only a subset of the memory according to its mapping table. Look at PAE extensions for example: you can have up to 64GB of RAM, but the kernel or any process can access only up to 4GB of memory. | unknown | 
d6271 | train | Use a ComboBox instead of a TextBox. The following example will autocomplete, matching any piece of the text, not just the starting letters.
This should be a complete form, just add your own data source, and data source column names. :-)
using System;
using System.Data;
using System.Windows.Forms;
public partial class frmTestAutocomplete : Form
{
private DataTable maoCompleteList;
private const string MC_DISPLAY_COL = "name";
private const string MC_ID_COL = "id";
public frmTestAutocomplete()
{
InitializeComponent();
}
private void frmTestAutocomplete_Load(object sender, EventArgs e)
{
maoCompleteList = GetDataTableFromDatabase();
maoCompleteList.CaseSensitive = false; //turn off case sensitivity for searching
testCombo.DisplayMember = MC_DISPLAY_COL;
testCombo.ValueMember = MC_ID_COL;
testCombo.DataSource = maoCompleteList;
testCombo.SelectedIndexChanged += testCombo_SelectedIndexChanged;
testCombo.KeyUp += testCombo_KeyUp;
}
private void testCombo_KeyUp(object sender, KeyEventArgs e)
{
        //use keyUp event, as text changed traps too many other events.
ComboBox oBox = (ComboBox)sender;
string sBoxText = oBox.Text;
DataRow[] oFilteredRows = maoCompleteList.Select(MC_DISPLAY_COL + " Like '%" + sBoxText + "%'");
DataTable oFilteredDT = oFilteredRows.Length > 0
? oFilteredRows.CopyToDataTable()
: maoCompleteList;
        //NOW THAT WE HAVE OUR FILTERED LIST, WE NEED TO RE-BIND IT WITHOUT CHANGING THE TEXT IN THE ComboBox.
//1).UNREGISTER THE SELECTED EVENT BEFORE RE-BINDING, b/c IT TRIGGERS ON BIND.
oBox.SelectedIndexChanged -= testCombo_SelectedIndexChanged; //don't select on typing.
oBox.DataSource = oFilteredDT; //2).rebind to filtered list.
oBox.SelectedIndexChanged += testCombo_SelectedIndexChanged;
//3).show the user the new filtered list.
oBox.DroppedDown = true; //this will overwrite the text in the ComboBox, so 4&5 put it back.
//4).binding data source erases text, so now we need to put the user's text back,
oBox.Text = sBoxText;
oBox.SelectionStart = sBoxText.Length; //5). need to put the user's cursor back where it was.
}
private void testCombo_SelectedIndexChanged(object sender, EventArgs e)
{
ComboBox oBox = (ComboBox)sender;
if (oBox.SelectedValue != null)
{
MessageBox.Show(string.Format(@"Item #{0} was selected.", oBox.SelectedValue));
}
}
}
//=====================================================================================================
// code from frmTestAutocomplete.Designer.cs
//=====================================================================================================
partial class frmTestAutocomplete
{
/// <summary>
/// Required designer variable.
/// </summary>
private readonly System.ComponentModel.IContainer components = null;
/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.testCombo = new System.Windows.Forms.ComboBox();
this.SuspendLayout();
//
// testCombo
//
this.testCombo.FormattingEnabled = true;
this.testCombo.Location = new System.Drawing.Point(27, 51);
this.testCombo.Name = "testCombo";
this.testCombo.Size = new System.Drawing.Size(224, 21);
this.testCombo.TabIndex = 0;
//
// frmTestAutocomplete
//
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.ClientSize = new System.Drawing.Size(292, 273);
this.Controls.Add(this.testCombo);
this.Name = "frmTestAutocomplete";
this.Text = "frmTestAutocomplete";
this.Load += new System.EventHandler(this.frmTestAutocomplete_Load);
this.ResumeLayout(false);
}
#endregion
private System.Windows.Forms.ComboBox testCombo;
}
A: Just in case @leniel's link goes down, here's some code that does the trick:
AutoCompleteStringCollection allowedTypes = new AutoCompleteStringCollection();
allowedTypes.AddRange(yourArrayOfSuggestions);
txtType.AutoCompleteCustomSource = allowedTypes;
txtType.AutoCompleteMode = AutoCompleteMode.Suggest;
txtType.AutoCompleteSource = AutoCompleteSource.CustomSource;
A: The answer link by Leniel was in vb.net, thanks Joel for your entry. Supplying my code to make it more explicit:
private void InitializeTextBox()
{
AutoCompleteStringCollection allowedStatorTypes = new AutoCompleteStringCollection();
var allstatortypes = StatorTypeDAL.LoadList<List<StatorType>>().OrderBy(x => x.Name).Select(x => x.Name).Distinct().ToList();
if (allstatortypes != null && allstatortypes.Count > 0)
{
foreach (string item in allstatortypes)
{
allowedStatorTypes.Add(item);
}
}
txtStatorTypes.AutoCompleteMode = AutoCompleteMode.Suggest;
txtStatorTypes.AutoCompleteSource = AutoCompleteSource.CustomSource;
txtStatorTypes.AutoCompleteCustomSource = allowedStatorTypes;
}
A: Use a combo box, set its datasource or give hard-coded entries, but set the following properties:
AutoCompleteMode = Suggest;
AutoCompleteSource = ListItems;
A: You want to set the TextBox.AutoCompleteSource to CustomSource and then add all of your strings to its AutoCompleteCustomSource property, which is a StringCollection. Then you should be good to go.
A: I want to add that the standard autocomplete for TextBox only works from the beginning of your strings, so if you hit N only strings starting with N will be found. If you want something better, you have to use a different control or implement the behavior yourself (i.e. react on the TextChanged event with some timer to delay execution, then filter your token list searching with IndexOf(inputString), and then set your AutoCompleteSource to the filtered list). | unknown | 
d6272 | train | You want to reset, not rebase. Rebasing is the act of replaying commits. Resetting is making the current commit some other one.
You will need to save any work that you may have in your working directory first:
git stash -u
then you will make you current commit the one you want with
git reset --hard 8ec2027
Optionally, afterwards you can save where you were before the reset with:
git branch temp HEAD@{1}
see reflog documentation to see how this works.
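As a sanity check, the whole flow can be replayed in a throwaway repository (the commit messages and names below are illustrative, not the asker's hashes):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first"
first=$(git rev-parse HEAD)
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second"

git stash -u >/dev/null 2>&1 || true   # nothing to stash in this toy repo
git reset --hard -q "$first"           # HEAD is now the first commit
git branch temp "HEAD@{1}"             # bookmark where we were before the reset
```

After this, git log --oneline temp still shows the "second" commit, so nothing is lost.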
A: Probably this could also work out for you
*
*Create a new branch at 2503013 (this saves the changes after 8ec202)
*git reset --hard 8ec2027 | unknown | |
d6273 | train | First: Do you really want to offer a 100% uptime SLA for your customers, when Azure itself doesn't offer 100% in its SLA's?
That said: Traffic Manager only load-balances your compute, not your storage. So if you're trying to increase uptime by having a set of backup compute nodes running in another data center, you need to think about data access speed and cost:
*
*With round robin, you'll now have distributed traffic across multiple data centers, guaranteed, and constantly. And if your data is in a single data center (which is a good idea to have data in a single System of Record, unless you have replication logic all taken care of), some of your users are going to see increased latency as the nodes separated from your data are going to be requesting data across many miles (potentially between continents). Plus, data egress has a $$$ cost to it.
*With performance, your users are directed toward the data center which offers them the lowest latency. Again, this now means traffic across multiple data centers, with the same issues as round robin.
*With failover, you now have all traffic going to one data center, with another designated as your failover data center (so it's for High Availability). In the event you have an outage in the primary data center, you'd now have a failover data center to rely on. This may help justify the added latency and cost, as you'd only experience this latency+cost when your primary app location becomes unavailable for some reason.
So: If you're going for the high availability route, to help approach the 100% availability mark, I'm guessing you'd be best off with the failover model.
A: Traffic manager comes into picture only when your application is deployed across multiple cloud services within same data center or in different data centers. If your application is hosted in a single cloud service (with multiple instances of course) , then the instances are load balanced using Round Robin pattern. This is the default load balancing pattern and comes to you without any extra charge.
You can read more about traffic manager here: https://azure.microsoft.com/en-us/documentation/articles/traffic-manager-overview/
A: As per my guess there can not be comparison which is best load balancing method of Azure Traffic manager. All of them have unique advantages and vary depending on the requirement of application. Most common scenario is to use performance load balancing option with azure traffic manager. But as Gaurav said, you will have to have your cloud service application hosted on more than one cloud services. If you wish to implement performance load balancing then here is the link to get you started - http://sanganakauthority.blogspot.com/2014/06/performance-load-balancing-using-azure.html | unknown | |
d6274 | train | Yes it does. If you change a new[]-ed pointer value and then call delete[] operator on it you are invoking undefined behavior:
char* someArray = new char[20];
someArray++;
delete[] someArray; // undefined behavior
Instead store the original value in a different pointer and call delete[] on it:
char* someArray = new char[20];
char* originalPointer = someArray;
someArray++; // changes the value but the originalPointer value remains the same
delete[] originalPointer; // OK
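To make this concrete, here is a compilable sketch of the same pattern (the function name is made up): a copy of the pointer moves through the array, while delete[] receives the exact value that new[] returned.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative helper: iterate with a moving cursor, free with the
// pointer value that new[] actually returned.
int sum_first_n(std::size_t n) {
    int* buffer = new int[n];
    for (std::size_t i = 0; i < n; ++i) buffer[i] = static_cast<int>(i);

    int* cursor = buffer;   // this copy may be advanced freely
    int total = 0;
    for (std::size_t i = 0; i < n; ++i) total += *cursor++;

    delete[] buffer;        // NOT delete[] cursor; that would be UB
    return total;
}
```

Here cursor ends up pointing one past the last element, so handing it to delete[] would be exactly the undefined behavior described above.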
A: You might be interested to know what new and delete really do under the covers (some licence taken, ignores exceptions and alignment):
template<class Thing>
Thing* new_array_of_things(std::size_t N)
{
std::size_t size = (sizeof(Thing) * N) + sizeof(std::size_t);
void* p = std::malloc(size);
auto store_p = reinterpret_cast<std::size_t*>(p);
*store_p = N;
auto first = reinterpret_cast<Thing*>(store_p + 1);
auto last = first + N;
for (auto i = first ; i != last; ++i)
{
new (i) Thing ();
}
return first;
}
template<class Thing>
void delete_array_of_things(Thing* first)
{
if (first)
{
auto store_p = reinterpret_cast<std::size_t*>(first) - 1;
auto N = *store_p;
while (N--)
{
(first + N)->~Thing();
}
std::free(store_p);
}
}
Summary:
The pointer you are given is not a pointer to the beginning of the allocated memory. The size of the array is stored just before the memory that provides storage for the array of objects (glossing over some details).
delete[] understands this and expects you to offer the pointer that was returned by new[], or a copy of it.
A: The general rule is that you can delete only pointers you got from new. With the non-array version you are allowed to pass a pointer to a base class subobject created with new (granted the base class has a virtual destructor). In the case of the array version, it must be the same pointer.
From cppreference
For the second (array) form, expression must be a null pointer value or a pointer value previously obtained by an array form of new-expression. If expression is anything else, including if it's a pointer obtained by the non-array form of new-expression, the behavior is undefined. | unknown | |
d6275 | train | You should use a completion-handler for your kind of problem:
//Run the action
iapButton.runAction(iapButtonReturn,
//After action is done, just call the completion-handler.
completion: {
firePosition.x = 320
firePosition.y = 280
}
)
Or you could use a SKAction.sequence and add your actions inside a SKAction.block:
var block = SKAction.runBlock({
fire.runAction(fireButtonReturn)
iapButton.runAction(iapButtonReturn)
aboutButton.runAction(abtButtonReturn)
})
var finish = SKAction.runBlock({
firePosition.x = 320
firePosition.y = 280
})
var sequence = SKAction.sequence([block, SKAction.waitForDuration(yourWaitDuration), finish])
self.runAction(sequence)
A: you should use a completion handler:
fire.runAction(fireButtonReturn,
completion: {
println("has no actions")
firePosition.x = 320
firePosition.y = 280
}
)
The problem with your solution is that the action is initiated with the runAction call but then runs in the background while the main thread continues execution (and therefore reaches your check before the action is finished).
A: extension SKNode {
func actionForKeyIsRunning(key: String) -> Bool {
if self.actionForKey(key) != nil {
return true
} else {
return false
}
}
}
You can use it :
if myShip.actionForKeyIsRunning("swipedLeft") {
print("swipe in action..")
} | unknown | |
d6276 | train | Sending data server-side to Google Analytics is entirely possible (and admittedly it is pretty daunting if you've not done it before).
The two best resources to use are the Google Analytics Measurement Protocol documentation and the Google Analytics Hit Builder. Use the parameter guide to prep the custom metric data specifically.
Here is my actual snippet that I use for all of my PHP projects (first ~150 lines or so). I'm certain there are better ways to do it, but at the very least it might help you figure out some of the complexities.
It's a lot of info to soak in, but I hope that gets you headed in the right direction! | unknown | |
d6277 | train | Firefox doesn't support MP3. It won't show the fallback message because it supports the audio tag.
https://developer.mozilla.org/En/Media_formats_supported_by_the_audio_and_video_elements#MPEG_H.264_(AAC_or_MP3)
A: You can't play MP3 files with such a code in Firefox.
See https://developer.mozilla.org/En/Media_formats_supported_by_the_audio_and_video_elements | unknown | |
d6278 | train | Your media_serve_protected function is returning a Forbidden response if the url does not start with media/<id>. But your url is in the form media/root/<id>. | unknown | |
d6279 | train | In SharePoint Server Enterprise you can use Performance Point functionality. MSDN best practices. It's not straightforward but possible. Otherwise you can use some 3rd party component. | unknown | |
d6280 | train | all files / directores should be owned by user, to fix it run:
rvm fix-permissions
To avoid this problem in the future, just try to avoid using sudo or rvmsudo; it should never be required (rvm uses sudo internally when it is required). | unknown | 
d6281 | train | You can use urlencode on your data.recherche. But there is also a more natural way to do this in Twig. | unknown | 
d6282 | train | I "solved" it myself. One misconception that I had was that every insert transaction is confirmed in the MongoDB console, while it actually only confirms the first one, or if there is some time between the commands.
In addition, the insert processes were too quick after each other and MongoDB appears to not handle this correctly under Win10 x64. I changed from the Array-Buffer to the internal buffer (see comments) and only continued with the process after the previous data was inserted.
This is the simplified resulting code
db.collection('seedlist', function(err, collection) {
syncLoop(0,0, collection);
//...
});
function syncLoop(q, w, collection) {
batch = collection.initializeUnorderedBulkOp({useLegacyOps: true});
for(var e=0;e<words.length;e++) {
batch.insert({a:a, b:b});
}
batch.execute(function(err, result) {
if(err) throw err;
//...
return setTimeout(function() {
syncLoop(qNew,wNew,collection);
}, 0); // Timer to prevent Memory leak
});
} | unknown | |
d6283 | train | This code is certainly not perfect, but it basically compares the Strings and saves how many characters matched the corresponding character in the other String. This of course leads to it not really working that well with different sized Strings, as it will treat everything after the missing letter as false (unless it matches the character by chance). But maybe it helps regardless:
String match = "example";
String input = "exnaplr";
int smaller;
if (match.length() < input.length())
smaller = match.length(); else smaller = input.length();
int correct = 0;
for (int i = 0; i < smaller; i++) {
if (match.charAt(i) == input.charAt(i)) correct++;
}
int percentage = (int) ((double) correct / match.length() * 100);
System.out.println("Input was " + percentage + "% correct!"); | unknown | |
d6284 | train | sbt "testOnly HelloWorldExercise" | unknown | |
d6285 | train | Try to use this,
var online = navigator.onLine;
and now you can do it like this:
if(online){
alert('Connection is good');
}
else{
alert('There is no internet connection');
}
UPDATE:
Try to put the alert here,
if(online){
setTimeout('updateSection(' + sect + ')', 10000);
//alert('updateSection: ' + sect);
var ajax = new sack();
ajax.requestFile = 'ajax/getMessages.php?section=1';
ajax.method = 'post';
/*ajax.onError = whenError;*/
ajax.onCompletion = whenComplete;
ajax.runAJAX();
}
else{
alert('There is no internet connection');
}
A: If I'm understanding you correctly you could do something like this:
Every time the onerror event fires on an ajax request, increment a counter. After a set number of failures in a row (or within some window of time), change the length of the time-out.
var timeoutLength = 10000
setTimeout('updateSection(' + sect + ')', timeoutLength);
changing the timeoutLength once the ajax requests are failing, i.e. there is no internet connection.
EDIT
var errorCount = 0;
ajax.onError = whenError;
function whenError() {
    errorCount++;
    if (errorCount >= 5) {
        timeoutLength = 3600000;
    }
}
function whenComplete() {
    errorCount = 0;
    ...
}
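The counter idea can also be sketched in isolation (hypothetical helper names; the limit of five consecutive failures is illustrative):

```javascript
// Standalone sketch of a failure counter deciding the polling delay.
function makeBackoff(limit, normalDelay, offlineDelay) {
    var errorCount = 0;
    return {
        onError: function () { errorCount++; },
        onComplete: function () { errorCount = 0; },
        delay: function () {
            return errorCount >= limit ? offlineDelay : normalDelay;
        }
    };
}

var backoff = makeBackoff(5, 10000, 3600000);
```

Wire onError/onComplete into the ajax callbacks and read delay() when scheduling the next setTimeout.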
This requires 5 errors in a row to assume that the internet is down. You should probably play around with that. But this should show you the general idea. | unknown | |
d6286 | train | The KFP SDK has two major versions: v1.8.x and v2.x.x (in pre-release at the time of writing this).
KFP SDK v2.x.x compiles pipelines and components to IR YAML [example], a platform neutral pipeline representation format. It can be run on the KFP open source backend or on other platforms, such as Google Cloud Vertex AI Pipelines.
KFP SDK v1.8.x, by default, compiles pipelines and components to Argo Workflow YAML. Argo Workflow YAML is executed on Kubernetes and is not platform neutral.
KFP SDK v1.8.x provides two ways to author pipelines using v2 Python syntax:
KFP SDK v2-compatible mode is a feature in KFP SDK v1.8.x which permits using v2 Python authoring syntax within KFP SDK v1 but compiles to Argo Workflow YAML. v2-compatible mode is deprecated and should not be used.
The KFP SDK v2 namespace in KFP SDK v1.8.x (from kfp.v2 import dsl, compiler) permits using v2 Python authoring syntax within KFP SDK v1 and compiles to IR YAML [usage example]. While this mode is not deprecated, users should prefer authoring IR YAML via the pre-released KFP SDK v2.x.x. | unknown | |
d6287 | train | Use some kind of flag to determine if the image should be drawn or not and simply change it's state as needed...
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
Graphics2D g2 = (Graphics2D) g;
if (draw) {
g2.drawImage(Menu, 0, 0, getWidth(), getHeight(), null);
}
}
Then change the state of the flag when you need to...
Timer timer = new Timer(5, new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
draw = false;
repaint();
}
});
Note: It's not recommended to override paint; paint is at the top of the paint chain and can easily break how the paint process works, causing no end of issues. Instead, it's normally recommended to use paintComponent. See Performing Custom Painting for more details
Also note: javax.swing.Timer expects the delay in milliseconds...5 is kind of fast... | unknown | |
d6288 | train | Try this regex
/^[a-z]+@{1}[a-z]{2,}$/g
Your string must start with a-z (^) and end in a-z($)
^ and $ are for beginning and end
A: Your Regular Expression looks fine. I tested it on a few test cases.
*
*Symbol ^ Matches the beginning of the string, or the beginning of a line if the multiline flag (m) is enabled. This matches a position, not a character.
*Symbol $ Matches the end of the string, or the end of a line if the multiline flag (m) is enabled. This matches a position, not a character.
For example, if you have to match a pattern of exactly length 3, then you can use ^ and $ to denote the beginning and end of the pattern.
A: An example of email regex is like this
/^[+a-zA-Z0-9._-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}$/m
for your question,
Please tell me about beginning (^) and end ($) symbols used in regular expression...
In my example regex above, both symbols are best used if you're using the multiline (m) modifier in your regex. (^) is used to identify the beginning of the line and ($) is used to identify the end of the line. It does affect your original problem, but I would suggest you use these symbols in your regular expressions.
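To illustrate the difference the anchors make, here is the pattern from the question (with the redundant @{1} written as @, and without the g flag so .test() stays stateless):

```javascript
var anchored = /^[a-z]+@[a-z]{2,}$/;   // must match the WHOLE string
var unanchored = /[a-z]+@[a-z]{2,}/;   // a match anywhere is enough

console.log(anchored.test("user@example"));       // true
console.log(anchored.test("!!user@example!!"));   // false: ^ and $ reject the extra characters
console.log(unanchored.test("!!user@example!!")); // true: matches the substring
```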
Hope me help well. | unknown | |
d6289 | train | Store each request, then use $.when to create a single deferred object to listen for them all to be complete.
var req1 = $.ajax({...});
var req2 = $.ajax({...});
var req3 = $.ajax({...});
$.when( req1, req2, req3 ).done(function(){
console.log("all done")
}); | unknown | |
d6290 | train | Your query produces a Cartesian product because you have not supplied the relationship between the two tables, bookmarks and users:
SELECT url
FROM bookmarks
INNER JOIN users
ON bookmarks.COLNAME = users.COLNAME
WHERE bookmarks.user_id = '$session->user_id'
where COLNAME is the column that defines how the tables are related to each other, or how the tables should be linked.
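To see the effect concretely, here is a tiny self-contained sketch (using SQLite and made-up table contents purely for illustration): without a join condition every bookmark row pairs with every user row.

```python
import sqlite3

# In-memory demo: a join without a relationship duplicates rows
# (Cartesian product); an ON clause fixes it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE bookmarks (user_id INTEGER, url TEXT);
    INSERT INTO users VALUES (1, 'a'), (2, 'b');
    INSERT INTO bookmarks VALUES (1, 'http://x'), (1, 'http://y');
""")

# No join condition: every bookmark row pairs with every user row.
cartesian = conn.execute(
    "SELECT url FROM bookmarks, users WHERE bookmarks.user_id = 1"
).fetchall()

# Proper join condition: each bookmark appears once.
joined = conn.execute(
    """SELECT url FROM bookmarks
       INNER JOIN users ON bookmarks.user_id = users.id
       WHERE bookmarks.user_id = 1"""
).fetchall()

print(len(cartesian), len(joined))  # 4 2
```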
To further gain more knowledge about joins, kindly visit the link below:
*
*Visual Representation of SQL Joins
As a sidenote, the query is vulnerable with SQL Injection if the value(s) of the variables came from the outside. Please take a look at the article below to learn how to prevent from it. By using PreparedStatements you can get rid of using single quotes around values.
*
*How to prevent SQL injection in PHP?
A: As JW pointed out, you are producing a Cartesian Product -- you aren't joining bookmarks against users. That's why you're duplicating rows.
With that said, you don't need to join users at all in your above query:
"SELECT url FROM bookmarks WHERE bookmarks.user_id = {$session->user_id}"
Good luck. | unknown | |
d6291 | train | The default configuration provider will look at the app.config or web.config in your case. However you can use the XmlConfigurator class to load configurations from a Stream
http://logging.apache.org/log4net/release/sdk/log4net.Config.XmlConfigurator.Configure_overload_7.html
In your role configuration you can specify a blob location then use a blob client object from the Azure storage SDK and load the xml from a single blob location.
Log4Net configuration: http://logging.apache.org/log4net/release/manual/configuration.html
This is similar to the Azure diagnostics configuration which uses an xml blob.
The caveat to this is that you need to do some more implementation work, like regularly checking for updates to the file, if you want to make live changes to your logging. | unknown | 
d6292 | train | Since you mentioned the collaboration cache folder, I suppose your Revit model is the Revit Cloud Worksharing model (a.k.a C4R model, model of Autodesk Collaboration for Revit).
If so, we can call APS Data Management to obtain the projectGuid and modelGuid in the model's version tip like below.
{
"type":"versions",
"id":"urn:adsk.wipprod:fs.file:vf.abcd1234?version=1",
"attributes":{
"name":"fileName.rvt",
"displayName":"fileName.rvt",
...
"mimeType":"application/vnd.autodesk.r360",
"storageSize":123456,
"fileType":"rvt",
"extension":{
"type":"versions:autodesk.bim360:C4RModel",
....
"data":{
...
"projectGuid":"48da72af-3aa6-4b76-866b-c11bb3d53883",
....
"modelGuid":"e666fa30-9808-42f4-a05b-8cb8da576fe9",
....
}
}
},
....
}
Afterward, open the C4R model using Revit API like below:
var region = ModelPathUtils.CloudRegionUS; //!<<< depends on where your BIM360/ACC account is based, US or EU.
var projectGuid = new Guid("48da72af-3aa6-4b76-866b-c11bb3d53883");
var modelGuid = new Guid("e666fa30-9808-42f4-a05b-8cb8da576fe9");
var modelPath = ModelPathUtils.ConvertCloudGUIDsToCloudPath( region, projectGuid, modelGuid ); //!<<< For Revit 2023 and newer.
//var modelPath = ModelPathUtils.ConvertCloudGUIDsToCloudPath( projectGuid, modelGuid ); //!<<< For Revit 2019 ~ 2022
var openOptions = new OpenOptions();
app.OpenAndActivateDocument( modelPath, openOptions ); //!<<< on desktop
// app.OpenDocumentFile( modelPath, openOptions ); //!<<< on Design Automation for Revit or don't want to activate the model on Revit desktop.
References:
*
*https://aps.autodesk.com/blog/accessing-bim-360-design-models-revit
*https://thebuildingcoder.typepad.com/blog/2020/04/revit-2021-cloud-model-api.html#4.4
*https://help.autodesk.com/view/RVT/2023/ENU/?guid=Revit_API_Revit_API_Developers_Guide_Introduction_Application_and_Document_CloudFiles_html | unknown | |
d6293 | train | To second Paul's response: yes, ctags (especially exuberant-ctags (http://ctags.sourceforge.net/)) is great. I have also added this to my vimrc, so I can use one tags file for an entire project:
set tags=tags;/
A: Use gd or gD while placing the cursor on any variable in your program.
*
*gd will take you to the local declaration.
*gD will take you to the global declaration.
More navigation options can be found here.
Use cscope for cross-referencing large projects such as the Linux kernel.
A: TL;DR:
You can do this using internal VIM functionality but a modern (and much easier) way is to use COC for intellisense-like completion and one or more language servers (LS) for jump-to-definition (and way way more). For even more functionality (but it's not needed for jump-to-definition) you can install one or more debuggers and get a full blown IDE experience.
Second best is to use native VIM functionality called define-search, but it was invented for the C preprocessor's #define directive; most other languages require extra configuration, and for some it isn't possible at all (you also miss out on other IDE features). Finally, a fallback to that is ctags.
Quick-start:
*
*install vim-plug to manage your VIM plug-ins
*add COC and (optionally) Vimspector at the top of ~/.vimrc:
call plug#begin()
Plug 'neoclide/coc.nvim', {'branch': 'release'}
Plug 'puremourning/vimspector'
call plug#end()
" key mappings example
nmap <silent> gd <Plug>(coc-definition)
nmap <silent> gD <Plug>(coc-implementation)
nmap <silent> gr <Plug>(coc-references)
" there's way more, see `:help coc-key-mappings@en'
*call :source $MYVIMRC | PlugInstall to reload VIM config and download plug-ins
*restart vim and call :CocInstall coc-marketplace to get easy access to COC extensions
*call :CocList marketplace and search for language servers, e.g.:
*
*type python to find coc-jedi,
*type php to find coc-phpls, etc.
*(optionally) see :h VimspectorInstall to install additional debuggers, e.g.:
*
*:VimspectorInstall debugpy,
*:VimspectorInstall vscode-php-debug, etc.
Full story:
Language server (LS) is a separate standalone application (one for each programming language) that runs in the background and analyses your whole project in real time exposing extra capabilities to your editor (any editor, not only vim). You get things like:
*
*namespace aware tag completion
*jump to definition
*jump to next / previous error
*find all references to an object
*find all interface implementations
*rename across a whole project
*documentation on hover
*snippets, code actions, formatting, linting and more...
Communication with language servers takes place via Language Server Protocol (LSP). Both nvim and vim8 (or higher) support LSP through plug-ins, the most popular being Conquer of Completion (COC).
List of actively developed language servers and their capabilities is available on Lang Server website. Not all of those are provided by COC extensions. If you want to use one of those you can either write a COC extension yourself or install LS manually and use the combo of following VIM plug-ins as alternative to COC:
*
*LanguageClient - handles LSP
*deoplete - triggers completion as you type
Communication with debuggers takes place via Debug Adapter Protocol (DAP). The most popular DAP plug-in for VIM is Vimspector.
Language Server Protocol (LSP) was created by Microsoft for Visual Studio Code and released as an open source project with a permissive MIT license (standardized by collaboration with Red Hat and Codenvy). Later on Microsoft released Debug Adapter Protocol (DAP) as well. Any language supported by VSCode is supported in VIM.
I personally recommend using COC + language servers provided by COC extensions + ALE for extra linting (but with LSP support disabled to avoid conflicts with COC) + Vimspector + debuggers provided by Vimspector (called "gadgets") + following VIM plug-ins:
call plug#begin()
Plug 'neoclide/coc.nvim'
Plug 'dense-analysis/ale'
Plug 'puremourning/vimspector'
Plug 'scrooloose/nerdtree'
Plug 'scrooloose/nerdcommenter'
Plug 'sheerun/vim-polyglot'
Plug 'yggdroot/indentline'
Plug 'tpope/vim-surround'
Plug 'kana/vim-textobj-user'
\| Plug 'glts/vim-textobj-comment'
Plug 'janko/vim-test'
Plug 'vim-scripts/vcscommand.vim'
Plug 'mhinz/vim-signify'
call plug#end()
You can google each to see what they do.
Native VIM jump to definition:
If you really don't want to use Language Server and still want a somewhat decent jump to definition with native VIM you should get familiar with :ij and :dj which stand for include-jump and definition-jump. These VIM commands let you jump to any file that's included by your project or jump to any defined symbol that's in any of the included files. For that to work, however, VIM has to know how lines that include files or define symbols look like in any given language. You can set it up per language in ~/.vim/ftplugin/$file_type.vim with set include=$regex and set define=$regex patterns as described in :h include-search, although, coming up with those patterns is a bit of an art and sometimes not possible at all, e.g. for languages where symbol definition or file import can span over multiple lines (e.g. Golang). If that's your case the usual fallback is ctags as described in other answers.
A: Use ctags. Generate a tags file, and tell vim where it is using the :tags command. Then you can just jump to the function definition using Ctrl-]
There are more tags tricks and tips in this question.
A: If everything is contained in one file, there's the command gd (as in 'goto definition'), which will take you to the first occurrence in the file of the word under the cursor, which is often the definition.
A: As Paul Tomblin mentioned you have to use ctags.
You could also consider using plugins to select the appropriate match or to preview the definition of the function under the cursor.
Without plugins you will have a headache trying to select one of hundreds of overloaded 'doAction' methods, as the built-in ctags support doesn't take the context into account - just the name.
Also you can use cscope and its 'find global symbol' function. But your vim has to be compiled with +cscope support, which isn't a default build option.
If you know that the function is defined in the current file, you can use the 'gD' keystroke in normal mode to jump to the definition of the symbol under the cursor.
Here is the most downloaded plugin for navigation
http://www.vim.org/scripts/script.php?script_id=273
Here is one I've written to select context while jump to tag
http://www.vim.org/scripts/script.php?script_id=2507
A: Another common technique is to place the function name in the first column. This allows the definition to be found with a simple search.
int
main(int argc, char *argv[])
{
...
}
The above function could then be found with /^main inside the file or with :grep -r '^main' *.c in a directory. As long as code is properly indented the only time the identifier will occur at the beginning of a line is at the function definition.
Of course, if you aren't using ctags from this point on you should be ashamed of yourself! However, I find this coding standard a helpful addition as well.
A: 1- install exuberant ctags. If you're using osx, this article shows a little trick:
http://www.runtime-era.com/2012/05/exuberant-ctags-in-osx-107.html
2- If you wish to include ctags only for the files in your directory, run this command in your directory:
ctags -R
This will create a "tags" file for you.
3- If you're using Ruby and wish to include the ctags for your gems (this has been really helpful for me with RubyMotion and local gems that I have developed), do the following:
ctags --exclude=.git --exclude='*.log' -R * `bundle show --paths`
credit: https://coderwall.com/p/lv1qww
(Note that I omitted the -e option which generates tags for emacs instead of vim)
4- Add the following line to your ~/.vimrc
set autochdir
set tags+=./tags;
(Why the semi colon: http://vim.wikia.com/wiki/Single_tags_file_for_a_source_tree )
5- Go to the word you'd like to follow and hit ctrl + ] ; if you'd like to go back, use ctrl+o (source: https://stackoverflow.com/a/53929/226255)
A: g* does a decent job without ctags being set up.
That is, type g* (or just * - see below) to search for the word under the cursor (in this case, the function name). Then press n to go to the next (or Shift-n for previous) occurrence.
It doesn't jump directly to the definition, given that this command just searches for the word under the cursor, but if you don't want to deal with setting up ctags at the moment, you can at least save yourself from having to re-type the function name to search for its definition.
--Edit--
Although I've been using g* for a long time, I've recently discovered two shortcuts for these shortcuts!
(a) * will jump to the next occurrence of the word under the cursor. (No need to type the g, the 'goto' command in vi).
(b) # goes to the previous occurrence, in similar fashion.
N and n still work, but '#' is often very useful to start the search initially in the reverse direction, for example, when looking for the declaration of a variable under the cursor.
A: Install cscope. It works very much like ctags but more powerful. To go to definition, instead of Ctrl + ], do Ctrl + \ + g. Of course you may use both concurrently. But with a big project (say Linux kernel), cscope is miles ahead.
A: After generating ctags, you can also use the following in vim:
:tag <f_name>
The above will take you to the function definition. | unknown |
d6294 | train | Found the answer to this.
Instead of using the .change event, I switched it to .click and everything worked fine.
Hope this helps someone.
Slap
A: For those not using JQuery, the onClick event is what you want.
It appears that onClick has the behavior of what we intuitively call "select". That is, onClick will capture both click events and tab events that select a radio button.
onClick has this behavior across all browsers (I've tested with IE 7-9, FF 3.6, Chrome 10, & Safari 5). | unknown | |
d6295 | train | You can define your router ahead of time; it won't do anything until you call Backbone.History.start().
You can bind the "reset" event on your collection to start history like this:
my_collection.bind("reset", _.once(Backbone.History.start, Backbone.History))
Then the router will start doing stuff when your collection is fully loaded. I'm not sure if this is exactly what you're looking for (since you mentioned having a variable number of collections).
I have a similar situation, except that I know in advance which collections I want to have loaded before I start routing. I added a startAfter method to my Router, like so:
window.Workspace = new (Backbone.Router.extend({
. . .
startAfter: function(collections) {
// Start history when required collections are loaded
var start = _.after(collections.length, _.once(function(){
Backbone.history.start()
}))
_.each(collections, function(collection) {
collection.bind('reset', start, Backbone.history)
});
}
}));
and then after I've setup my collections
Workspace.startAfter([collection_a, collection_b, ...])
This could be adapted to work with standalone models too, although I think you'd need to bind to something other than the 'reset' event.
I'm glad I read your example code, the use of _.once and _.defer pointed me in the right direction.
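The _.after / _.once combination is language-agnostic; here is a minimal Python sketch of the same idea (all names made up), showing that the wrapped callback fires exactly once, and only after the required number of 'reset' events:

```python
def once(fn):
    """Run fn only on the first call, like underscore's _.once."""
    done = False
    def wrapper(*args, **kwargs):
        nonlocal done
        if not done:
            done = True
            return fn(*args, **kwargs)
    return wrapper

def after(n, fn):
    """Invoke fn starting from the n-th call, like underscore's _.after."""
    count = 0
    def wrapper(*args, **kwargs):
        nonlocal count
        count += 1
        if count >= n:
            return fn(*args, **kwargs)
    return wrapper

calls = []
# Pretend three collections each fire a 'reset' event (plus two extras)
start = after(3, once(lambda: calls.append("history started")))
for _ in range(5):
    start()
print(calls)  # -> ['history started']
```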
A: I'm just checking in my .render() method that all required fields are filled before using them. If they're not filled yet, I render a 'Loading...' widget.
And all my views are subscribed to model changes via this.model.bind('change', this.render, this);, so as soon as the model is loaded, render() will be called again. | unknown |
d6296 | train | The easiest solution (for your example) is to remove the line
plt.xlim([0,200])
But since you've put it there, I assume that you really want/need it there. So then, you have to manually adapt the height of the colorbar:
cb = plt.colorbar(mappable=s, ax=ax)
plt.draw()
posax = ax.get_position()
poscb = cb.ax.get_position()
cb.ax.set_position([poscb.x0, posax.y0, poscb.width, posax.height])
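The snippet above is a fragment (s and ax come from the question's plot); a self-contained sketch with made-up data that runs end to end might look like this:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import numpy as np
import matplotlib.pyplot as plt

# Made-up data standing in for the question's scatter plot
x = np.linspace(0, 300, 50)
fig, ax = plt.subplots()
s = ax.scatter(x, np.sin(x / 30.0), c=x, cmap="viridis")
ax.set_xlim(0, 200)  # the xlim from the question stays in place

cb = plt.colorbar(mappable=s, ax=ax)
plt.draw()
posax = ax.get_position()
poscb = cb.ax.get_position()
# Stretch the colorbar so its vertical extent matches the main axes
cb.ax.set_position([poscb.x0, posax.y0, poscb.width, posax.height])
```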
Using the shrink argument of colorbar as @MaxNoe suggests might also do the trick. But you will have to fiddle around to get the right value. | unknown | |
d6297 | train | You must explicitly set proxy_http_version to 1.1 to make it work, otherwise it uses 1.0 by default.
server {
listen 80;
server_name DOMAIN;
location /${TG_BOT_TOKEN} {
proxy_http_version 1.1;
proxy_pass http://pp-telegram-bot.default.svc.cluster.local:8000/${TG_BOT_TOKEN}/;
}
}
A: The problem is caused by nginx. Every request that passes through nginx to your python-telegram-bot will return the HTTP status "426 Upgrade Required." By default, nginx still uses HTTP/1.0 for upstream connections, while Istio (via the Envoy proxy) does not support HTTP/1.0.
So you need to force nginx to use HTTP/1.1 for upstream connections.
server {
listen 80;
server_name DOMAIN;
location /${TG_BOT_TOKEN} {
proxy_pass http://pp-telegram-bot:8000/${TG_BOT_TOKEN}/;
proxy_http_version 1.1; # this will force HTTP/1.1
}
location /check {
return 200 'true';
}
} | unknown | |
d6298 | train | This is the correct code for the question
import UIKit
import WebKit
class ViewController: UIViewController, WKUIDelegate {
@IBOutlet weak var webView: WKWebView!
@IBOutlet weak var activityIndicator: UIActivityIndicatorView!
override func viewDidLoad() {
super.viewDidLoad()
webView.uiDelegate = self
// Do any additional setup after loading the view, typically from a nib.
let url = "https://example.com"
let request = URLRequest(url: URL(string: url)!)
self.webView.load(request)
self.webView.addObserver(self, forKeyPath: #keyPath(WKWebView.isLoading), options: .new, context: nil)
// Page Scrolling - false or true
webView.scrollView.isScrollEnabled = false
}
// Open new tab links
func webView(_ webView: WKWebView,
createWebViewWith configuration: WKWebViewConfiguration,
for navigationAction: WKNavigationAction,
windowFeatures: WKWindowFeatures) -> WKWebView? {
if navigationAction.targetFrame == nil, let url = navigationAction.request.url, let scheme = url.scheme {
if ["http", "https", "mailto"].contains(where: { $0.caseInsensitiveCompare(scheme) == .orderedSame }) {
UIApplication.shared.open(url, options: [:], completionHandler: nil)
}
}
return nil
}
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
if keyPath == "loading" {
if webView.isLoading {
activityIndicator.startAnimating()
activityIndicator.isHidden = false
} else {
activityIndicator.stopAnimating()
activityIndicator.isHidden = true
}
}
}
} | unknown | |
d6299 | train | IMPORTXML, as well as IMPORTHTML, can only see the source code, not the DOM shown in the web browser's developer console.
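A small Python sketch of that limitation, using a made-up page source: a lookup over the raw source never finds nodes that client-side JavaScript would add to the DOM.

```python
from xml.etree import ElementTree

# Hypothetical page source: a real browser would run the script and
# inject a price element, but the raw source never contains it.
source = """<html><body>
<div id="shell"></div>
<script>/* client-side JS injects the price element here */</script>
</body></html>"""

root = ElementTree.fromstring(source)
hits = [el for el in root.iter("div") if el.get("id") == "price"]
print(hits)  # -> []  (a source-only lookup, like IMPORTXML, sees nothing)
```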
If the content that you want to scrape is added to the DOM by client-side JavaScript or the web browser engine, it can't be scraped by using IMPORTXML. | unknown |
d6300 | train | yes you can do it actually you need to use this code in page life-cycle method
In page code block you can use something like this OR anywhere else
use RainLab\Pages\Classes\Page as StaticPage;
function onStart() {
$pageName = 'static-test';
$staticPage = StaticPage::load($this->controller->getTheme(), $pageName);
dd($staticPage->viewBag);
}
Let me know if you find any issues. | unknown |