Q:
Why is node-sass-middleware not working?
I have installed the node-sass-middleware module on my Express application, but I can't get it working because the middleware is reading an incorrect source. When I debug, the console log is:
GET / 200 558.983 ms - 4651
source: /home/karim/Snippets/my-financial/public/stylesheets/sass/stylesheets/main.sass
dest: /home/karim/Snippets/my-financial/public/stylesheets/stylesheets/main.css
read: /home/karim/Snippets/my-financial/public/stylesheets/stylesheets/main.css
Both directories are wrong. Why is the middleware adding the string stylesheets/ between the source/dest (..public/stylesheets/sass/) and the .sass/.css file (main.sass and main.css)?
I have this configuration inside my app.js:
var sassMiddleware = require('node-sass-middleware');
...
...
var app = express();
app.use(sassMiddleware({
src: path.join(__dirname, 'public/stylesheets/sass'),
dest: path.join(__dirname, 'public/stylesheets'),
debug: true,
indentedSyntax: true,
outputStyle: 'compressed'
}));
Obviously this is not compiling anything, because the directories are wrong.
Inside the ..public/stylesheets/sass/ folder I just have one file, main.sass, which I want to compile, placing the result outside the sass/ folder, i.e. at ..public/stylesheets/.
A:
That is because -- I am pretty sure -- in your HTML file there is something like this:
<head>
<!--All your head stuff and-->
<link rel="stylesheet" href="/stylesheets/main.css"/>
</head>
-- Let's call that href yourAwesomeHref for a moment.
When your server receives any GET request, the middleware will look for the compiled main.sass under /home/karim/Snippets/my-financial/public/stylesheets (the dest option for the middleware) followed by yourAwesomeHref, resulting in this route:
/home/karim/Snippets/my-financial/public/stylesheets/stylesheets/main.css
That file obviously does not exist at all!
So you have to add prefix: "/stylesheets" to your middleware to avoid that problem.
The final code is:
var sassMiddleware = require('node-sass-middleware');
...
...
var app = express();
app.use(sassMiddleware({
src: path.join(__dirname, 'public/stylesheets/sass'),
dest: path.join(__dirname, 'public/stylesheets'),
debug: true,
indentedSyntax: true,
outputStyle: 'compressed',
prefix: '/stylesheets'
}));
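A related note, as a minimal sketch only (it assumes the same app.js as above, with the usual var express = require('express') in the elided lines, and that static files are served from public/): node-sass-middleware is typically registered before express.static, so the freshly compiled CSS in public/stylesheets is the one actually served for /stylesheets/main.css.
app.use(sassMiddleware({
    src: path.join(__dirname, 'public/stylesheets/sass'),
    dest: path.join(__dirname, 'public/stylesheets'),
    prefix: '/stylesheets',
    indentedSyntax: true,
    outputStyle: 'compressed'
}));
// static serving comes after the Sass middleware
app.use(express.static(path.join(__dirname, 'public')));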
Q:
Division always resulting in zero
The result of alfa is always returning 0. Why?
package javaapplication4;
public class MediaMovelSuavizaçãoExp {
public double CalculoPrevisao(double[] valores){
double[] values = new double[valores.length];
//Calculating the value of alfa
double alfa = 2 / ( values.length + 1);
return alfa;
}
}
A:
You are doing an integer division, so the result is an integer, even if you then store it in a double. So divide with a double, like this:
return 2.0 / (values.length + 1);
See it working on ideone. And on repl.it. I also put it on GitHub for future reference.
Q:
How is frame data stored in libav?
I am trying to learn to use libav. I have followed the very first tutorial on dranger.com, but I got a little confused at one point.
// Write pixel data
for(y=0; y<height; y++)
fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);
This code clearly works, but I don't quite understand why. Particularly, I don't understand how the frame data in pFrame->data is stored, whether or not it depends on the format/codec in use, why pFrame->data and pFrame->linesize are always referenced at index 0, and why we are adding y to pFrame->data[0].
In the tutorial it says
We're going to be kind of sketchy on the PPM format itself; trust us, it works.
I am not sure if writing it to the ppm format is what is causing this process to seem so strange to me. Any clarification on why this code is the way it is and how libav stores frame data would be very helpful. I am not very familiar with media encoding/decoding in general, thus why I am trying to learn.
A:
particularly I don't understand how the frame data in pFrame->data is stored, whether or not it depends on the format/codec in use
Yes, it depends on the pix_fmt value. Some formats are planar and others are not.
why pFrame->data and pFrame->linesize are always referenced at index 0,
If you look at the struct, you will see that data is an array of pointers/a pointer to a pointer. So pFrame->data[0] is a pointer to the data in the first "plane". Some formats, like RGB, have a single plane, where all data is stored in one buffer. Other formats, like planar YUV, use a separate buffer for each plane, e.g. Y = pFrame->data[0], U = pFrame->data[1], V = pFrame->data[2]. Audio may use one plane per channel, etc.
and why we are adding y to pFrame->data[0].
Because the example is looping over an image line by line, top to bottom.
To get the pointer to the first pixel of any line, you multiply the linesize by the line number and then add it to the pointer.
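For example, a minimal sketch of that arithmetic (assuming a packed 24-bit RGB frame, i.e. 3 bytes per pixel as in the PPM tutorial, and some pixel coordinates x and y inside the frame):
// pointer to the first byte of line y
uint8_t *line = pFrame->data[0] + y * pFrame->linesize[0];
// pointer to pixel x on that line (3 bytes per pixel: R, G, B)
uint8_t *pixel = line + x * 3;
uint8_t r = pixel[0], g = pixel[1], b = pixel[2];
Note that linesize[0] can be larger than width*3 because of padding/alignment, which is why the stride is used rather than the width.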
Q:
Did Valerie Jarrett say she wanted "America to be a more Islamic country"?
This picture, being shared on Twitter, alleges that Valerie Jarrett, former Senior Advisor to the President during the Obama administration, claimed to be of Iranian descent and of Islamic faith in her yearbook.
Image reads
Take a look at the 1977 Stanford Yearbook
I am a Iranian by birth and of my Islamic faith. I am also an American Citizen and I seek to help change America to be a more Islamic country. My faith guides me and I feel like it is going well in the transition of using freedom of religion in America against itself.
Did she say this in Stanford's '77 yearbook?
A:
No; the caption is written in the Calibri font. This font was not developed until 2002.
Source: https://www.fonts.com/font/microsoft-corporation/calibri
A:
The Stanford yearbook in 1977 is mostly photos without captions.
(There is no Valerie Bowman on this page anyway, because she didn't graduate until 1978.)
It does have a few short comments that seem to deal with everyday life on campus.
The faculty and staff kept reminding me to take advantage of all the benefits available to me because I was at this prestigious university ... My career placement counselor told me how to gather impressive recommendations and explained how to best present my litany of apprenticeships to land the job I wanted or get into the grad school of my choice. ...
[unsigned]
I took these photos from an eBay listing and thus cannot look at every last page to confirm that the quoted statement does not appear, but I think the idea that a yearbook that looks like this contains a manifesto of violent Islamic revolution is frankly an absurd far-right fever dream. 1977 was before the Iran revolution and political Islam was not on the American radar.
A:
Just addressed by Snopes apparently,
The quote attributed to “Valerie Jarrett, Stanford University, 1977” about her “seek[ing] to help change America to be a more Islamic country” is an unfounded one that has no source other than recent repetition (primarily on right-wing web sites and blogs). It’s also an anachronism, as “Valerie Jarrett” didn’t exist in 1977: she was born Valerie Bowman and didn’t take the latter surname until she married William Jarrett in 1983.
Also according to Snopes there is no evidence she's Muslim and her parents aren't Iranian -- though she was born in Iran.
Q:
ChangeHandler not recognising a blank date
I am checking for a change in the value of a date. The ValueChangeHandler is recognising a date (e.g. 1/5/2014 is updated to the DB when entered). However, when I delete a date it is not recognised (i.e., the DB is not updated to null - I have tried Backspace, highlight and Del, and overtyping with spaces). I then entered a new date (2/5/2014) and this was updated to the DB. Any ideas as to why this code does not recognise that I have removed the date, please?
Regards,
Glyn
I have updated this with the code suggested by Braj. Unfortunately this did not work.
final DateBox awardedDate = new DateBox();
awardedDate.setFormat(new DefaultFormat(DateTimeFormat.getFormat("dd/MM/yyyy")));
awardedDate.setValue(ymAwards.getCaAwardedDate());
awardedDate.setWidth("75px");
//Add change handler for the awarded date.
//Only a Leader or Administrator can update the date
if (accountLevel.equals("Leader") || accountLevel.equals("Administrator")) {
awardedDate.addValueChangeHandler(new ValueChangeHandler<java.util.Date>() {
int pog = 0;
public void onValueChange(ValueChangeEvent<java.util.Date> event) {
if (pog == 0) {
pog++;
Window.alert("First change hadler.");
//Check for a null date and handle it for dateBoxArchived and dateBoxPackOut
java.sql.Date sqlDateAwarded = awardedDate.getValue() == null ? null : new java.sql.Date(awardedDate.getValue().getTime());
AsyncCallback<YMAwards> callback = new YMAwardedDateHandler<YMAwards>();
rpc.updateYMAwarded(youthMemberID, returnAwID, sqlDateAwarded, callback);
}else{
pog = 0;
}
}
});
awardedDate.getTextBox().addValueChangeHandler(new ValueChangeHandler<String>() {
@Override
public void onValueChange(ValueChangeEvent<String> event) {
if (event.getValue() == null) {
Window.alert("Second change hadler.");
//Check for a null date and handle it for dateBoxArchived and dateBoxPackOut
java.sql.Date sqlDateAwarded = awardedDate.getValue() == null ? null : new java.sql.Date(awardedDate.getValue().getTime());
AsyncCallback<YMAwards> callback = new YMAwardedDateHandler<YMAwards>();
rpc.updateYMAwarded(youthMemberID, returnAwID, sqlDateAwarded, callback);
}
}
});
}
A:
Add this line:
awardedDate.setFireNullValues(true);
This was added in GWT 2.5.
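For illustration, a minimal sketch of where that call fits (same awardedDate variable as in the question; the handler body is reduced to the null check and the DB update is left as a comment):
final DateBox awardedDate = new DateBox();
awardedDate.setFormat(new DefaultFormat(DateTimeFormat.getFormat("dd/MM/yyyy")));
awardedDate.setFireNullValues(true); // fire a ValueChangeEvent with a null value when the box is cleared
awardedDate.addValueChangeHandler(new ValueChangeHandler<java.util.Date>() {
    public void onValueChange(ValueChangeEvent<java.util.Date> event) {
        if (event.getValue() == null) {
            // the date was deleted - persist null to the DB here
        }
    }
});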
Q:
Return a Dynamic png from Pylons
What I'm trying to do is have my Pylons app dynamically generate an image based on some data, and return it in such a way that it can be viewed in a browser.
So far I am generating my image like this:
import Image, ImageDraw
image = Image.new("RGB", (width, height),"black")
img_out = ImageDraw.Draw(image)
img_out.polygon(...
img_out.text(...
#etc
The image is successfully generated, and can even be saved to file like this:
img_out.save(filepath)
My problem is that I am not trying to write it to disk, but rather return it via a Pylons response. Based off of the answers to another question I was able to get this far:
import FileApp
my_headers = [('Content-Disposition', 'attachment; filename=\"' + user_filename + '\"'), ('Content-Type', 'text/plain')]
file_app = FileApp(filepath, headers=my_headers)
return file_app(request.environ, self.start_reponse)
Using this solution I can take a png I have saved on the server side and return it to the user for download. Still, there are two problems here. The first is that I am forced to write the file to disk and then serve it from disk, rather than simply using the image straight from code. The second is that it is actually returning the file, therefore a user is forced to download it rather than viewing it in their browser.
What I want is for the user to be able to view the file in their browser, not download it themselves. IDEALLY I wouldn't have to save the image to disk on the server side either, but I realize it is likely impossible to serve it without it living on either the server or client's computer.
So my question is this. Can I serve the image straight from code such that the user will simply see the image in their browser as the response to their request? If not, can I save the image to disk server side and serve it from there such that the user will see the image in their browser and not be prompted to download a file?
(For what it's worth I am using Python 2.6.2 and PasteScript 1.7.4.2)
A:
Browsers can accept raw, base64-encoded data as part of the src attribute of an img tag...
from PIL import Image
from cStringIO import StringIO
a = Image.new('RGB', (10, 10), 'black')
# ...
buf = StringIO()
a.save(buf, 'png')
b64img = '<img src="data:image/png;base64,{0}" />'.format(buf.getvalue().encode('base64'))
So what you do here is build your image, save it into a string buffer in memory (instead of on disk), then encode it to base64... You return the <img> tag as part of the page (or purely by itself if being lazy) using whatever templates/etc... Maybe just a return Response(b64img) would do it...
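If you would rather have the browser render the image from a normal URL (e.g. as the src of an img tag pointing at your controller), another option is to return the PNG bytes directly with an image/png content type instead of embedding them. A rough sketch of a Pylons-style controller action (the action name is illustrative, not from the question):
from pylons import response
from cStringIO import StringIO
import Image, ImageDraw

def picture(self):
    image = Image.new("RGB", (100, 100), "black")
    img_out = ImageDraw.Draw(image)
    img_out.text((10, 10), "hello")
    buf = StringIO()
    image.save(buf, 'png')
    response.content_type = 'image/png'  # render in the browser rather than prompting a download
    return buf.getvalue()
Because the content type is image/png and there is no Content-Disposition: attachment header, the browser displays the image instead of downloading it, and nothing is written to disk.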
Q:
extract images from clusters separately in kmeans python
I have done K-means clustering over a dataset of images, after which I have 5 clusters. Now I want to extract the images from each cluster and save them separately. I have no idea how to do that. I have tried doing this, but I am not able to access the images.
here is my code
import matplotlib.pyplot
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.externals import joblib
import numpy as np
import cv2
import sys
import pickle
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
import os
from skimage.feature import local_binary_pattern
# To calculate a normalized histogram
from scipy.stats import itemfreq
from sklearn.preprocessing import normalize
import cvutils
import csv
import numpy
from matplotlib.pyplot import imshow
from PIL import Image
import time
from sklearn.cluster import KMeans
start_time=time.time()
############################################################################################
dir_unknown = 'UntitledFolder'
trainingSet='/home/irum/Desktop/Face-Recognition/thakarrecog /UntitledFolder/UntitledFolder1'
imageLabels='/home/irum/Desktop/Face-Recognition/thakarrecog/class_train'
path='/home/irum/Desktop/Face-Recognition/thakarrecog/Clusters'
#Create CSV File
images_names = []
SEPARATOR=" "
print"start"
'''
for (dirname, dirnames, filenames) in os.walk(dir_unknown):
for subdirname in dirnames:
subject_path = os.path.join(dirname, subdirname)
for filename in os.listdir(subject_path):
abs_path = "%s/%s" % (subject_path, filename)
#csv_path = "%s%s%d" % (abs_path, SEPARATOR, label)
#print "%s%s%d" % (abs_path, SEPARATOR, label)
images_names.append("%s%s%d" % (abs_path, SEPARATOR, label))
#print images_names
with open('class_train1', 'w') as myfile:
wr = csv.writer(myfile,delimiter=' ', doublequote=False , quotechar=None, lineterminator='\r\n', skipinitialspace=True)
wr.writerow(imageLabels)
label = label + 1
'''
# Store the path of training images in train_images
train_images = cvutils.imlist(trainingSet)
print "Total Images",len(train_images)
# Dictionary containing image paths as keys and corresponding label as value
train_dic = {}
with open('/home/irum/Desktop/Face-Recognition/thakarrecog/class_train', 'rb') as csvfile:
reader = csv.reader(csvfile, delimiter=' ')
for row in reader:
train_dic[row[0]] = row[1]
# List for storing the LBP Histograms, address of images and the corresponding label
X_test = []
X_name = []
y_test = []
print"Calculating LBP Histograms"
h1 = time.time()
# For each image in the training set calculate the LBP histogram
# and update X_test, X_name and y_test
for train_image in train_images:
# Read the image
im = cv2.imread(train_image)
# Convert to grayscale as LBP works on grayscale image
im_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
radius = 3
# Number of points to be considered as neighbourers
no_points = 8 * radius
# Uniform LBP is used
lbp = local_binary_pattern(im_gray, no_points, radius, method='uniform')
# Calculate the histogram
x = itemfreq(lbp.ravel())
# Normalize the histogram
hist = x[:, 1]/sum(x[:, 1])
# Append image path in X_name
X_name.append(os.path.join(train_image))
# Append histogram to X_name
X_test.append(os.path.join(hist))
# Append class label in y_test
#y_test.append(train_dic[os.path.split(images_names)[1]])
h2 = time.time()
t = (h2 - h1)
print"Time taken by LBPH",t
# Dump the data
joblib.dump((X_name, X_test), "lbp.pkl", compress=3)
p1 = time.time()
print"Applying PCA on LBP Histograms"
X_test = np.array(X_test)
pca = PCA(n_components=26)
pca.fit(X_test)
pca_activations = pca.transform(X_test)
p2 = time.time()
t = (p2 - p1)
print"Time taken by PCA",t
t1 = time.time()
print"Applying t-SNE on PCA"
# then run the PCA-projected activations through t-SNE to get our final embedding
X = np.array(pca_activations)
tsne = TSNE(n_components=2, learning_rate=500, perplexity=50, verbose=2, angle=0.2, early_exaggeration=7.0).fit_transform(X)
print "t-SNE Type", type(tsne)
print"tsne",tsne
t2 = time.time()
t = (t2 - t1)
print"Time taken by t-SNE",t
n1 = time.time()
print"normalize t-sne points to {0,1}"
tx, ty = tsne[:,0], tsne[:,1]
tx = (tx-np.min(tx)) / (np.max(tx) - np.min(tx))
ty = (ty-np.min(ty)) / (np.max(ty) - np.min(ty))
n2 = time.time()
t = (n2 - n1)
print "Normalization completed in time",t
width = 5000
height = 5000
max_dim = 100
print "displaying"
full_image = Image.new('RGB', (width, height))
for img, x, y in zip(X_name, tx, ty):
#print "for loop"
tile = Image.open(img)
rs = max(1, tile.width/max_dim, tile.height/max_dim)
tile = tile.resize((tile.width/rs, tile.height/rs), Image.ANTIALIAS)
full_image.paste(tile, (int((width-max_dim)*x), int((height-max_dim)*y)))
full_image.save("myTSNE.png")
#matplotlib.pyplot.figure(figsize = (12,12))
#plt.imshow(full_image)
print "K-Means clustering"
#Convert Images to Float32
images = np.asarray(tsne, np.float32)
N = len(images)
images = images.reshape(N,-1)
#using kmeans clustring having 5 clusters
kmeans = KMeans(n_clusters=5)
#passing images to kmeans
kmeans.fit(images)
centroids = kmeans.cluster_centers_
labels = kmeans.labels_
colors = 10*['r.','g.','b.','c.','k.','y.','m.']
#I want to Move each cluster to seperate folder (5 clusters means 5 folders)
for i in range(len(images)):
print("coordinate:",images[i], "label:", labels[i])
plt.plot(images[i][0], images[i][1], colors[labels[i]], markersize = 10)
img = cv2.convertScaleAbs(images[i])
print "Images Type", img.dtype
pin=sorted([int(n[:n.find('.')]) for n in os.listdir(path)
if n[0]!='.' ]+[0])[-1] + 1
cv2.imwrite('%s/%s.png' % (path, pin), img)
plt.scatter(centroids[:, 0],centroids[:, 1], marker = "x", s=150, linewidths = 5, zorder = 10)
plt.show()
end_time=time.time()
total_time=t = (end_time - start_time)
print"Total execution time in seconds",total_time
I am trying to extract the clusters here, but failing. I need the images of each cluster separately as an output so that I can manipulate them further.
`#I want to Move each cluster to seperate folder (5 clusters means 5 folders)
For i in range(len(images)):
print("coordinate:",images[i], "label:", labels[i])
plt.plot(images[i][0], images[i][1], colors[labels[i]], markersize = 10)
img = cv2.convertScaleAbs(images[i])
print "Images Type", img.dtype
`
I want the images in the red cluster separate, the images in the blue cluster separate, and so on - in separate folders actually. 5 clusters, 5 folders.
I have accessed images like this:
for i,j in zip(images, labels):
if labels[j] == 1:
#print "Images Type", images.dtype
img = images[i]
pin=sorted([int(n[:n.find('.')]) for n in os.listdir(path)
if n[0]!='.' ]+[0])[-1] + 1
cv2.imwrite('%s/%s.png' % (path, pin), img)
but I am getting deformed images, and at a very small size.
I get output like this
for an image like this
A:
From your code, it seems that you have your images here images and that the variable labels is an array with the same dimension, containing the class labels.
If you want to get all the images for a class called myclass, then simply do:
images_in_myclass = [i for i, j in zip(images, labels) if j == 'myclass']
zip allows you to iterate over the two arrays element-wise, and you are only returning the images for which the label condition is satisfied.
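Applied to the question's variables, a minimal sketch (it assumes X_name holds the image paths in the same order as the points passed to KMeans, so labels[i] corresponds to X_name[i]; the output path is illustrative):
import os
import shutil

output_root = '/home/irum/Desktop/Face-Recognition/thakarrecog/Clusters'
for img_path, label in zip(X_name, labels):
    cluster_dir = os.path.join(output_root, 'cluster_%d' % label)
    if not os.path.exists(cluster_dir):
        os.makedirs(cluster_dir)
    # copy the original image file, so nothing is resized or deformed
    shutil.copy(img_path, cluster_dir)
This copies the original files, which avoids the tiny deformed images produced by writing the 2-D t-SNE coordinates with cv2.imwrite.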
Q:
JBoss EAP 7.1 Deployment Failed Integrator: Provider not found
We are migrating our applications from JBoss EAP 6.x.x to JBoss EAP 7.1. I have made all the required configurations on my JBoss 7.1, but while deploying one application on JBoss EAP 7.1 I'm getting the following error on the admin console:
"failure-description" => {
"WFLYCTL0080: Failed services" => {
"jboss.persistenceunit.\"project-services.ear/pack-enterprise-domain-ejb.jar#packEnterpise\".__FIRST_PHASE__" => "java.util.ServiceConfigurationError: org.hibernate.integrator.spi.Integrator: Provider com.comp.pack.enterprise.domain.util.CustomEnversIntegrator not found
Caused by: java.util.ServiceConfigurationError: org.hibernate.integrator.spi.Integrator: Provider com.comp.pack.enterprise.domain.util.CustomEnversIntegrator not found "}},
"rolled-back" => true
}
In server.log file I got following Exception:
2019-02-27 13:28:19,666 WARN [org.jboss.modules] (ServerService Thread Pool -- 27) Failed to define class com.comp.pack.enterprise.domain.util.CustomEnversIntegrator in Module "deployment.project-services.ear" from Service Module Loader: java.lang.NoClassDefFoundError: Failed to link com/comp/pack/enterprise/domain/util/CustomEnversIntegrator (Module "deployment.project-services.ear" from Service Module Loader): org/hibernate/envers/event/EnversIntegrator
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.jboss.modules.ModuleClassLoader.defineClass(ModuleClassLoader.java:446)
at org.jboss.modules.ModuleClassLoader.loadClassLocal(ModuleClassLoader.java:274)
at org.jboss.modules.ModuleClassLoader$1.loadClassLocal(ModuleClassLoader.java:77)
at org.jboss.modules.Module.loadModuleClass(Module.java:713)
at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:190)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:412)
at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:400)
at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:116)
at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl$AggregatedClassLoader.findClass(ClassLoaderServiceImpl.java:209)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:370)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.loadJavaServices(ClassLoaderServiceImpl.java:340)
at org.hibernate.integrator.internal.IntegratorServiceImpl.<init>(IntegratorServiceImpl.java:40)
at org.hibernate.boot.registry.BootstrapServiceRegistryBuilder.build(BootstrapServiceRegistryBuilder.java:213)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.buildBootstrapServiceRegistry(EntityManagerFactoryBuilderImpl.java:366)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.<init>(EntityManagerFactoryBuilderImpl.java:167)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.<init>(EntityManagerFactoryBuilderImpl.java:150)
at org.hibernate.jpa.boot.spi.Bootstrap.getEntityManagerFactoryBuilder(Bootstrap.java:28)
at org.hibernate.jpa.boot.spi.Bootstrap.getEntityManagerFactoryBuilder(Bootstrap.java:40)
at org.jboss.as.jpa.hibernate5.TwoPhaseBootstrapImpl.<init>(TwoPhaseBootstrapImpl.java:39)
at org.jboss.as.jpa.hibernate5.HibernatePersistenceProviderAdaptor.getBootstrap(HibernatePersistenceProviderAdaptor.java:199)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl.createContainerEntityManagerFactoryBuilder(PhaseOnePersistenceUnitServiceImpl.java:254)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl.access$900(PhaseOnePersistenceUnitServiceImpl.java:59)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl$1$1.run(PhaseOnePersistenceUnitServiceImpl.java:125)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl$1$1.run(PhaseOnePersistenceUnitServiceImpl.java:104)
at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:640)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl$1.run(PhaseOnePersistenceUnitServiceImpl.java:137)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
2019-02-27 13:28:19,666 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 27) MSC000001: Failed to start service jboss.persistenceunit."project-services.ear/pack-enterprise-domain-ejb.jar#packEnterpise".__FIRST_PHASE__: org.jboss.msc.service.StartException in service jboss.persistenceunit."project-services.ear/pack-enterprise-domain-ejb.jar#packEnterpise".__FIRST_PHASE__: java.util.ServiceConfigurationError: org.hibernate.integrator.spi.Integrator: Provider com.comp.pack.enterprise.domain.util.CustomEnversIntegrator not found
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl$1$1.run(PhaseOnePersistenceUnitServiceImpl.java:128)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl$1$1.run(PhaseOnePersistenceUnitServiceImpl.java:104)
at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:640)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl$1.run(PhaseOnePersistenceUnitServiceImpl.java:137)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: java.util.ServiceConfigurationError: org.hibernate.integrator.spi.Integrator: Provider com.comp.pack.enterprise.domain.util.CustomEnversIntegrator not found
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:372)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.hibernate.boot.registry.classloading.internal.ClassLoaderServiceImpl.loadJavaServices(ClassLoaderServiceImpl.java:340)
at org.hibernate.integrator.internal.IntegratorServiceImpl.<init>(IntegratorServiceImpl.java:40)
at org.hibernate.boot.registry.BootstrapServiceRegistryBuilder.build(BootstrapServiceRegistryBuilder.java:213)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.buildBootstrapServiceRegistry(EntityManagerFactoryBuilderImpl.java:366)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.<init>(EntityManagerFactoryBuilderImpl.java:167)
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.<init>(EntityManagerFactoryBuilderImpl.java:150)
at org.hibernate.jpa.boot.spi.Bootstrap.getEntityManagerFactoryBuilder(Bootstrap.java:28)
at org.hibernate.jpa.boot.spi.Bootstrap.getEntityManagerFactoryBuilder(Bootstrap.java:40)
at org.jboss.as.jpa.hibernate5.TwoPhaseBootstrapImpl.<init>(TwoPhaseBootstrapImpl.java:39)
at org.jboss.as.jpa.hibernate5.HibernatePersistenceProviderAdaptor.getBootstrap(HibernatePersistenceProviderAdaptor.java:199)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl.createContainerEntityManagerFactoryBuilder(PhaseOnePersistenceUnitServiceImpl.java:254)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl.access$900(PhaseOnePersistenceUnitServiceImpl.java:59)
at org.jboss.as.jpa.service.PhaseOnePersistenceUnitServiceImpl$1$1.run(PhaseOnePersistenceUnitServiceImpl.java:125)
... 7 more
}
My Deployment structure :
project-services.ear
+++ lib
+++ META-INF
+-- proj-impl.jar
+-- proj-domain-ejb.jar
com.comp.pack.enterprise.domain.util.CustomEnversIntegrator is a custom Integrator which is part of a jar present in the lib folder.
A:
The issue was with the Hibernate version provided by JBoss EAP 7.1. I was using Hibernate 3 in my code, and JBoss EAP 7.1 provides Hibernate 5.1 by default. By updating my code according to Hibernate 5.1, I was able to resolve this issue.
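For reference, a rough sketch of what an org.hibernate.integrator.spi.Integrator looks like against the Hibernate 5.1 API (the class name matches the one in the error for illustration only; the body is not the poster's actual code). The class is still registered through META-INF/services/org.hibernate.integrator.spi.Integrator, which is why the ServiceLoader is the component reporting the failure:
package com.comp.pack.enterprise.domain.util;

import org.hibernate.boot.Metadata;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.integrator.spi.Integrator;
import org.hibernate.service.spi.SessionFactoryServiceRegistry;

public class CustomEnversIntegrator implements Integrator {

    @Override
    public void integrate(Metadata metadata, SessionFactoryImplementor sessionFactory,
                          SessionFactoryServiceRegistry serviceRegistry) {
        // register custom Envers event listeners here, instead of linking against
        // org/hibernate/envers/event/EnversIntegrator, which the NoClassDefFoundError
        // shows is not present in the Hibernate 5.1 shipped with the server
    }

    @Override
    public void disintegrate(SessionFactoryImplementor sessionFactory,
                             SessionFactoryServiceRegistry serviceRegistry) {
        // nothing to clean up
    }
}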
Q:
Were people building FPGAs out of TTL logic prior to the first sales in 1984?
My experience growing up was that my Dad would program 'EEPROMs' or Flash ROMs from his Apple IIGS. (I don't know if that is similar to an FPGA or not). He used these in custom wire-wrap computers he was building (around 1986-1989).
We know that the first FPGAs went on sale in 1984.
I've just finished reading Charles Petzold's book, Code: The Hidden Language of Computer Hardware and Software. In it Charles explains building relays into gates, gates into logic components, and logic components into computing machines.
He talks about the book TTL Data Book for Design Engineers which first came out in 1973. (I'm assuming people had lists of TTL gate chips prior to this). This talks a lot about the Texas Instruments 7400 series of logic gate chips.
Now to me it seems you could combine TTL chips (or relays or transistors) into a CPU using the instructions in this book. But everything online says "use FPGAs instead! It's easier for that scale! [Unless you're getting the concepts and want to see the end to end view]". Fair enough - some pragmatism is good - it depends on what the outcome is.
When you look at the structure of an FPGA - it appears to only have slightly more complexity than a 4-bit adder.
Now I don't know when the first four-bit adder was invented. (My guess is it came out of Claude Shannon's work on boolean logic in 1948).
So somewhere between 1948 and 1984 someone must have thought of the idea of FPGAs.
Assumption: By "people" I mean any hobbyist or engineer who is not explicitly prototyping the next FPGA to be manufactured by Naval Surface Warfare Center, Altera and Xilinx.
My question is: Were people building FPGAs out of TTL logic prior to the first sales in 1984?
A:
Yes, at one level we did do designs in TTL, but usually not the whole structure of an FPGA. From memory I'm pretty sure that the QL ULA was wire-wrapped first (I worked with David Karlin, the QL designer). Usually the problems were mostly to do with the signal timings due to longer wire lengths when you did stuff in TTL (I'm ignoring power and heat for this discussion).
At the sort of time period you're talking about I suspect most folks used things like the AMD2900 for custom CPU designs (the wikipedia page goes into a long list of designs)
In 1988 when I worked with David on an Ethernet chip design (a chip called ENZO) we breadboarded the data separator in TTL. Later (1990..2) we did initial prototypes of designs using Actel chips, and I did an 80286-based support chipset design for an embedded system with most of an AT system built in (I skipped DMA as it wasn't needed) in a single part.
We ran out of space when I did a networked chip for a mixing desk (1983?) (not that big, IIRC less than 10000 gates), but we prototyped it in sections, disabling bits of logic to gain enough space to test with. The Actel parts were the biggest available, and the main logic block was a 4-input mux followed by a latch. (Muxes make great logic gates: you can do nearly all four-input logic gates with one.) They were very expensive at the time (I have a figure of £400 in my head, which was probably over $1000 at the time). You could make a 4-bit Gray counter out of 4 LUTs, which I was rather pleased with. We didn't use Verilog/VHDL as you could beat it by hand fairly easily (also the designs were small enough to work with at the gate/latch/flipflop level).
We used a Toshiba sea of gates technology for our production ASICs a bit more flexible than a ULA but along similar lines.
In 1984 an FPGA would be too slow and small to produce a useful CPU. Certainly in the early 90s you would have been limited to single-digit MHz, and I seem to remember really struggling with propagation delays on the 80286 support chipset design I did.
A:
For the individual hobbyist or small development shop, the equivalent of FPGAs weren't available. Hardware logic was built from manually routed or wire-wrapped boards with many 74xx logic chips.
An intermediate step for volume production that was available in the early 1980s was the uncommitted logic array (“ULA”) or gate array. One of the earliest uses was in the Sinclair ZX81. While the final logic connection mask had to be applied at the factory, for volume production ULAs proved cheaper and more compact than discrete logic chips.
A:
Before FPGAs were PALs (Programmable Array Logic) and PLDs (Programmable Logic Devices). These had a programmable logic matrix of various forms, often followed by a flop to hold the result. Many variations were made, each trying to find the best package. These were roughly equivalent to MSI (medium scale integration) TTL circuits. Your comparison with a bit-slice adder is apt.
Eventually, as integration increased, more complex devices were made. I built some pretty useful logic around the AMD CPLDs (Complex Programmable Logic Devices), and also using some Atmel FPLAs (Field Programmable Logic Arrays).
FPGAs (Field Programmable Gate Arrays) went through a stage where they contained ever larger arrays of equivalent elements. Some were programmable like FLASH parts, and some loaded their logic configuration each time they were powered up.
The wheel of creation continues to turn, and now FPGAs often include processor cores, and special PHYs to connect with high-speed interfaces, such as USB, PCIe, DDR, and HDMI.
In the beginning, I designed programmable logic by drawing TTL diagrams and compiling that into fuse maps (sometimes by hand). Now, I do FPGA designs in Verilog and leave it up to the tool chain. Schematics as a representational tool don't scale as well as code.
Q:
Some admin pages redirecting to front page on save
I was developing a WordPress site locally, all was going well. I set up a preview site on a VPS I control. The account has the final domain, I added a subdomain and set the site up on that subdomain.
At this point, most things work. I can navigate the front of the site. In the backend, things get screwy. Several actions in the admin force a redirect to the front page of the site. For example, if I try to save permalinks with anything other than "Plain". No matter the permalinks settings, if I go to Appearance->Menus and try to save, it redirects to the home page. If I try to update a plugin using pretty updates (via ajax), it says it errored and the output produced is the markup for the front page.
This wasn't (and still isn't) a problem on the local copy. I've disabled all plugins and swapped the theme to 2017. No dice. WP-CLI find/replace on the database, all went square. Last caveat is that I keep the core files in a subdirectory (wp), which means the index.php in my site root does this:
require( dirname( __FILE__ ) . '/wp/wp-blog-header.php' );
And in wp-config.php I have to explicitly define:
define('WP_CONTENT_DIR', dirname(__FILE__).'/wp-content');
define('WP_CONTENT_URL', 'http://preview.mysite.com'.'/wp-content');
This keeps it from looking for wp-content in the same directory as the core files. I've used this setup many many times with no issues. I have one other WordPress site on this VPS using the same setup, but no subdomain, that works fine. I've scrapped the whole thing and started up to no avail.
Anyone ever seen anything like this?
wp-config.php:
<?php
/**
* The base configuration for WordPress
*
* The wp-config.php creation script uses this file during the
* installation. You don't have to use the web site, you can
* copy this file to "wp-config.php" and fill in the values.
*
* This file contains the following configurations:
*
* * MySQL settings
* * Secret keys
* * Database table prefix
* * ABSPATH
*
* @link https://codex.wordpress.org/Editing_wp-config.php
*
* @package WordPress
*/
// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'databasehere');
/** MySQL database username */
define('DB_USER', 'userhere');
/** MySQL database password */
define('DB_PASSWORD', 'passwordhere');
/** MySQL hostname */
define('DB_HOST', 'localhost');
/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');
/** The Database Collate type. Don't change this if in doubt. */
define('DB_COLLATE', '');
/**#@+
* Authentication Unique Keys and Salts.
*
* Change these to different unique phrases!
* You can generate these using the {@link https://api.wordpress.org/secret-key/1.1/salt/ WordPress.org secret-key service}
* You can change these at any point in time to invalidate all existing cookies. This will force all users to have to log in again.
*
* @since 2.6.0
*/
// salts defined properly here
/**#@-*/
/**
* WordPress Database Table prefix.
*
* You can have multiple installations in one database if you give each
* a unique prefix. Only numbers, letters, and underscores please!
*/
$table_prefix = 'wp_';
/**
* For developers: WordPress debugging mode.
*
* Change this to true to enable the display of notices during development.
* It is strongly recommended that plugin and theme developers use WP_DEBUG
* in their development environments.
*
* For information on other constants that can be used for debugging,
* visit the Codex.
*
* @link https://codex.wordpress.org/Debugging_in_WordPress
*/
define('WP_DEBUG', false);
define('WP_CONTENT_DIR', dirname(__FILE__).'/wp-content');
define('WP_CONTENT_URL', 'http://preview.url.com'.'/wp-content');
define( 'WP_HOME', 'http://' . $_SERVER['SERVER_NAME'] );
define( 'WP_SITEURL', WP_HOME . '/wp' );
/* That's all, stop editing! Happy blogging. */
/** Absolute path to the WordPress directory. */
if ( !defined('ABSPATH') )
define('ABSPATH', dirname(__FILE__) . '/');
/** Sets up WordPress vars and included files. */
require_once(ABSPATH . 'wp-settings.php');
And the directory structure:
- preview
- .git
- .gitignore
- .htaccess
- composer.json
- composer.lock
- index.php
- readme.md
- vendor/
- wp/
- license.txt
- readme.html
- wp-activate.php
- wp-admin
- wp-includes
- and all the other usual root files
- wp-config.php
- wp-content/ <-- has what you would expect: plugins, themes, uploads...
A:
Problems were due to the OWASP ModSecurity rules enabled on the server and causing false positives. Disabled, solved. Not a new problem. Also answered here.
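For anyone hitting the same thing: a rough sketch of how the engine can be put into detection-only mode while the false positives are tracked down (this assumes ModSecurity 2.x and access to the Apache virtual host configuration - these directives are not allowed in .htaccess):
<IfModule security2_module>
    # log rule matches but stop blocking/redirecting requests
    SecRuleEngine DetectionOnly
</IfModule>
Once the offending OWASP rules are identified from the audit log, they can be disabled individually rather than turning the whole rule set off.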
Q:
jQuery to pure JavaScript and how the interpreter looks up the DOM for elements
I have a couple of questions about the inner workings of JavaScript and how the interpreter handles certain queries.
The following JQuery will correctly get all the images that contain the word "flowers" in the src
$("img[src*='flowers']");
Jquery makes this very simple but what the pure javascript version?
We have a very large DOM. I take it if I do $("*[src*='flowers']") this will greatly affect performance (wildcard element). I'm interested in what the Javascript interpreter does differently between $("img[src*='flowers']") and $("*[src*='flowers']")
A:
Well, the clearest way to explain the difference is to show you how you'd write both DOM queries in plain JS:
jQuery's $("img[src*='flowers']"):
var images = document.getElementsByTagName('img');//gets all img tags
var result = [];
for (var i = 0; i < images.length;i++)
{
if (images[i].getAttribute('src').indexOf('flowers') !== -1)
{//if img src attribute contains flowers:
result.push(images[i]);
}
}
So as you can see, you're only searching through all img elements and checking their src attribute. If the src attribute contains the substring "flowers", it is added to the result array.
Whereas $("[src*='flowers']") equates to:
var all = document.getElementsByTagName('*');//gets complete DOM
var result = [];
for (var i =0; i <all.length; i++)
{
if (all[i].hasAttribute('src') && all[i].getAttribute('src').indexOf('flowers') !== -1)
{//calls 2 methods, for each element in DOM ~= twice the overhead
result.push(all[i]);
}
}
So the total number of nodes will be a lot higher than just the number of img nodes. Add to that the fact that you're calling two methods (hasAttribute and getAttribute) for every element in the DOM (thanks to short-circuit evaluation, getAttribute won't be called for elements that don't have an src attribute), and there's just a lot more going on behind the scenes in order for you to get the same result.
note:
I'm not saying that this is exactly how jQuery translates the DOM queries for you, it's a simplified version, but the basic principle stands. The second version (slower version) just deals with a lot more elements than the first. That's why it's a lot slower, too.
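As a side note, a small sketch: in modern browsers both selectors can also be run natively, without jQuery, via document.querySelectorAll:
var images = document.querySelectorAll("img[src*='flowers']");
var anything = document.querySelectorAll("[src*='flowers']");
The same performance difference applies - the tag-qualified selector gives the engine far fewer candidate elements to test.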
Q:
Why Django doesn't need all fields to test a model
I have a model like this:
class CreateDeal(models.Model):
name = models.CharField(max_length=100)
fuel = models.CharField(max_length=15)
mileage = models.PositiveIntegerField(db_index=True)
phone_number = models.CharField(max_length=17)
location = models.CharField(max_length=100, db_index=True)
car_picture = models.ImageField(upload_to='car_picture')
description = models.TextField()
price = models.PositiveSmallIntegerField(db_index=True)
available = models.BooleanField(default=True)
created_on = models.DateTimeField(default=timezone.now)
user = models.ForeignKey(User, on_delete=models.CASCADE)
def __str__(self):
return self.name
and I have a test class to test the model above like this:
class CreateDealTest(TestCase):
def setUp(self):
self.user = User.objects.create_user(
username='alfa', email='[email protected]', password='top_secret'
)
self.deal = CreateDeal.objects.create(
name='deal1', mileage=100, price=25, user=self.user
)
def test_deal_name(self):
deal = CreateDeal.objects.get(name='deal1')
expected_deal_name = f'{self.deal.name}'
self.assertAlmostEqual(expected_deal_name, str(deal))
if I run the test I have:
Ran 1 test in 0.166s
OK
My question is: why doesn't Django raise an exception, since almost all fields in my model are required? What I don't understand is that if I remove one field of CreateDeal in my setUp (like mileage, price, user or name) I get an error.
For instance if I remove mileage, I have this error:
raise utils.IntegrityError(*tuple(e.args))
django.db.utils.IntegrityError: (1048, "Column 'mileage' cannot be null")
A:
CharField, ImageField and TextField can be an empty string, which is valid at the database level. Some of your fields have default values, so they will be written if not set, which also makes them valid at the database level.
PositiveIntegerField and ForeignKey cannot be set to an empty string, only to a value or null, so they will fail since null=False by default.
The default blank=False option is only applied at the validation level, not at the database level. This means if you call full_clean() on your model, it will raise a ValidationError. But nothing stops you from saving an invalid model (save() does not call full_clean() as explained here).
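For illustration, a minimal sketch of a test that does enforce the required fields by running model validation (the test method name is illustrative; it would live in the same CreateDealTest class as above, reusing its setUp):
from django.core.exceptions import ValidationError

    def test_missing_fields_fail_validation(self):
        deal = CreateDeal(name='deal1', mileage=100, price=25, user=self.user)
        with self.assertRaises(ValidationError):
            # full_clean() applies the blank=False rules that save() skips
            deal.full_clean()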
Q:
Codeigniter PHP - loading a view at an anchor point
I have a form at the bottom of a long page. If a user fills out the form but it doesn't validate, the page is reloaded in the typical CodeIgniter fashion:
$this->load->view('template',$data);
however, because the form is way down at the bottom of the page, I need the page to load down there, like you do with HTML anchors. Does anyone know how to do this in CodeIgniter?
I can't use the CodeIgniter
redirect();
function because it loses the object and the validation errors are gone. In other frameworks I've used, like Yii, you can call the redirect function like:
$this->redirect();
which solves the problem because you keep the object. I've tried using:
$this->index()
within the controller, which works fine as a redirect, but the validation errors are in another method, which is where the current page is loaded from:
$this->item($labs)
but when I use this it gets stuck in a loop.
Any ideas? I've seen this question a lot on the net but no clear answers. I'm researching using codeigniter "flash data" but think it's a bit overkill.
cheers.
A:
I can't personally vouch for this, but according to this thread if you append the anchor to the form's action, it will work.
CodeIgniter helper:
<?php echo form_open('controller/function#anchor'); ?>
Or vanilla HTML:
<form method='post' action='controller/function#anchor'>
If you were open to using Javascript, you could easily detect a $validation_failed variable and appropriately scroll. Or, even better, use AJAX.
Another option is to put the form near the top of the page?
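A minimal sketch of that Javascript option (the $validation_failed flag and the anchor id are illustrative, not part of CodeIgniter itself; the snippet would sit in the view, after the form):
<?php if (!empty($validation_failed)): ?>
<script type="text/javascript">
    document.getElementById('anchor').scrollIntoView();
</script>
<?php endif; ?>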
Q:
Argument 'controller' is not a function, got undefined
This is my HTML file
<head>
<meta charset="UTF-8"/>
<title>CBIR</title>
<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css" rel="stylesheet">
<script src="http://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="../js/angular.js"></script>
<script src="../js/AngularCotroller.js"></script>
<script src="../js/ApiCallService.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js" type="text/javascript"></script>
</head>
<body>
<div ng-app="App">
<div ng-controller="AppController">
<div class="form-group row">
<button id="btnGetKey" class="btn btn-default" ng-click="btnGetKey()">Get Key</button>
<p> {{message}}</p>
</div>
<br>
</div>
</div>
</div>
</body>
</script>
This is AngularController.js file
var angularmodule = angular.module('App', []);
angularmodule.controller('AppController', function ($scope, $http, ApiCall) {
//Intital message value
$scope.message = "Don't Give up";
$scope.btnGetkey = function () {
var result = ApiCall.GetKeyFromServer().success(function (data) {
var data = $.parseJSON(JSON.parse(data));
$scope.message = data;
$scope.message = "123123123";
});
};
});
This is ApiServiceCall file
angularmodule.service('ApiCall', ['$http', function ($http)
{
var result;
// This is used for calling get methods from web api
this.GetKeyFromServer = function () {
result = $http.get('http://localhost:8090/CBIR/checkReturn').success(function (data, status) {
result = (data);
}).error(function () {
alert("Something went wrong");
});
return result;
};
}]);
When I load the index file, I get the error: Error: [ng:areq] Argument 'AppController' is not a function, got undefined
http://errors.angularjs.org/1.2.16/ng/areq?p0=AppController&p1=not%20a%20function%2C%20got%20undefined
I get this error. Please help.
A:
You alias one parameter, but the actual function uses three:
['$scope', function ($scope, $http, ApiCall)
You need either to specify all of them, like this:
['$scope', '$http', 'ApiCall', function ($scope, $http, ApiCall)
or if you're not going to minimize your code, just use:
function ($scope, $http, ApiCall)
As @Slytherin suggested, the other error you have is in your service file:
you have a typo: AnguarModule != angular.module
and even if it wasn't a typo, you're reinstantiating your module, instead of referencing it (the difference being the second parameter - see the docs)
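Putting that together, a sketch of the fully annotated controller based on the question's code (note it also renames btnGetkey to btnGetKey so it matches ng-click="btnGetKey()" in the template):
var angularmodule = angular.module('App', []);
angularmodule.controller('AppController', ['$scope', '$http', 'ApiCall',
    function ($scope, $http, ApiCall) {
        $scope.message = "Don't Give up";
        $scope.btnGetKey = function () {
            ApiCall.GetKeyFromServer().success(function (data) {
                $scope.message = data;
            });
        };
    }
]);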
Q:
Using Application context everywhere?
In an Android app, is there anything wrong with the following approach:
public class MyApp extends android.app.Application {
private static MyApp instance;
public MyApp() {
instance = this;
}
public static Context getContext() {
return instance;
}
}
and pass it everywhere (e.g. SQLiteOpenHelper) where context is required (and not leaking of course)?
A:
There are a couple of potential problems with this approach, though in a lot of circumstances (such as your example) it will work well.
In particular you should be careful when dealing with anything that deals with the GUI that requires a Context. For example, if you pass the application Context into the LayoutInflater you will get an Exception. Generally speaking, your approach is excellent: it's good practice to use an Activity's Context within that Activity, and the Application Context when passing a context beyond the scope of an Activity to avoid memory leaks.
Also, as an alternative to your pattern you can use the shortcut of calling getApplicationContext() on a Context object (such as an Activity) to get the Application Context.
A:
In my experience this approach shouldn't be necessary. If you need the context for anything you can usually get it via a call to View.getContext() and using the Context obtained there you can call Context.getApplicationContext() to get the Application context. If you are trying to get the Application context this from an Activity you can always call Activity.getApplication() which should be able to be passed as the Context needed for a call to SQLiteOpenHelper().
Overall there doesn't seem to be a problem with your approach for this situation, but when dealing with Context just make sure you are not leaking memory anywhere as described on the official Google Android Developers blog.
A:
Some people have asked: how can the singleton return a null pointer?
I'm answering that question. (I cannot answer in a comment because I need to post code.)
It may return null in between two events: (1) the class is loaded, and (2) the object of this class is created. Here's an example:
class X {
static X xinstance;
static Y yinstance = Y.yinstance;
X() {xinstance=this;}
}
class Y {
static X xinstance = X.xinstance;
static Y yinstance;
Y() {yinstance=this;}
}
public class A {
public static void main(String[] p) {
X x = new X();
Y y = new Y();
System.out.println("x:"+X.xinstance+" y:"+Y.yinstance);
System.out.println("x:"+Y.xinstance+" y:"+X.yinstance);
}
}
Let's run the code:
$ javac A.java
$ java A
x:X@a63599 y:Y@9036e
x:null y:null
The second line shows that Y.xinstance and X.yinstance are null; they are null because the variables X.xinstance and Y.yinstance were read when they were null.
Can this be fixed? Yes,
class X {
static Y y = Y.getInstance();
static X theinstance;
static X getInstance() {if(theinstance==null) {theinstance = new X();} return theinstance;}
}
class Y {
static X x = X.getInstance();
static Y theinstance;
static Y getInstance() {if(theinstance==null) {theinstance = new Y();} return theinstance;}
}
public class A {
public static void main(String[] p) {
System.out.println("x:"+X.getInstance()+" y:"+Y.getInstance());
System.out.println("x:"+Y.x+" y:"+X.y);
}
}
and this code shows no anomaly:
$ javac A.java
$ java A
x:X@1c059f6 y:Y@152506e
x:X@1c059f6 y:Y@152506e
BUT this is not an option for the Android Application object: the programmer does not control the time when it is created.
Once again: the difference between the first example and the second one is that the second example creates an instance if the static pointer is null. But a programmer cannot create the Android application object before the system decides to do it.
UPDATE
One more puzzling example where initialized static fields happen to be null.
Main.java:
enum MyEnum {
FIRST,SECOND;
private static String prefix="<", suffix=">";
String myName;
MyEnum() {
myName = makeMyName();
}
String makeMyName() {
return prefix + name() + suffix;
}
String getMyName() {
return myName;
}
}
public class Main {
public static void main(String args[]) {
System.out.println("first: "+MyEnum.FIRST+" second: "+MyEnum.SECOND);
System.out.println("first: "+MyEnum.FIRST.makeMyName()+" second: "+MyEnum.SECOND.makeMyName());
System.out.println("first: "+MyEnum.FIRST.getMyName()+" second: "+MyEnum.SECOND.getMyName());
}
}
And you get:
$ javac Main.java
$ java Main
first: FIRST second: SECOND
first: <FIRST> second: <SECOND>
first: nullFIRSTnull second: nullSECONDnull
Note that you cannot move the static variable declaration one line up; the code will not compile.
Q:
MBProgressHUD Conditional execution
I have a tableview whose contents are generated from a JSON array. It also employs location services, so as the user's location changes, the table is reloaded to reflect the changes of the table data relative to the user's location.
My problem is I show the activity indicator in the beginning of the view load, but each time location updates and table reloads, the indicator is shown. This is done in very short intervals, causing the application to crash.
Is it possible to put a condition on the following code so that it won't show the MBProgressHUD if it is not the initial load, but rather a reload caused by the location changes?
// Customize the number of rows in the table view.
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
HUD = [[MBProgressHUD alloc] initWithView:self.navigationController.view];
[self.navigationController.view addSubview:HUD];
[HUD showWhileExecuting:@selector(myTask) onTarget:self withObject:nil animated:YES];
return [rows count];
}
This is where the reload is performed:
- (void)locationManager:(CLLocationManager *)manager
didUpdateToLocation:(CLLocation *)newLocation
fromLocation:(CLLocation *)oldLocation
{
NSString *lat = [NSString stringWithFormat:@"%f", newLocation.coordinate.latitude];
NSString *longt = [NSString stringWithFormat:@"%f", newLocation.coordinate.longitude];
//CGFloat Alat = [lat floatValue];
//CGFloat Alon = [longt floatValue];
self.aalat = lat;
self.aalon = longt;
// NSLog(@"anlat: %@", aalat);
// NSLog(@"anlong: %@", aalon);
[[NSUserDefaults standardUserDefaults] setObject:aalat forKey:@"latitude"];
[[NSUserDefaults standardUserDefaults] setObject:aalon forKey:@"longitude"];
[tableview reloadData];
}
A:
I would trigger the HUD from your viewDidLoad or viewWillAppear methods to load things up initially. Firing from the tableview is going to give you lots of problems. The tableview methods are fired repeatedly when the user scrolls, so you will always have issues with that going on.
Fire a method from your location update to display the HUD. It has to run on the main thread anyway for the animations to work correctly. I would call the table reload from your new method to avoid any issues with your tableview.
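A rough sketch of the one-shot condition the question asks about (it assumes an isInitialLoad BOOL property added to the view controller; HUD, tableview and myTask are the same as in the question):
- (void)viewDidLoad {
    [super viewDidLoad];
    self.isInitialLoad = YES;
}

- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation
{
    if (self.isInitialLoad) {
        self.isInitialLoad = NO;
        // show the HUD once, from here, instead of in numberOfRowsInSection
        HUD = [[MBProgressHUD alloc] initWithView:self.navigationController.view];
        [self.navigationController.view addSubview:HUD];
        [HUD showWhileExecuting:@selector(myTask) onTarget:self withObject:nil animated:YES];
    }
    [tableview reloadData];
}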
Q:
Problem passing the value I have in an input type="hidden" to a JavaScript array
When I click the add-to-cart button, I call a JavaScript function that adds the idLibro that is in the input type="hidden" to the array inside the JavaScript function; later I would pass that JavaScript array to the controller to do something with it (but that's another topic).
I have the input like this:
<input type="hidden" name="inputText" id="inputText" value="@item.IdLibro" />
The problem I have is that when I click the add-to-cart button on any row of my table, it always adds the same idLibro to the JavaScript array. For example:
I have a table with 5 rows; in each row I have the add-to-cart button and the record's data. So if I click add to cart on row 1, the array stores idLibro 25, but if I click add to cart on row 4 it loads idLibro 25 again.
Here is the .cshtml:
@model List<Librery_MVC.Models.Libro>
<p id="pText">hola</p>
<div class="table-responsive">
<table class="table table-striped table-primary mt-5 table-bordered" id="myTable">
<thead>
<tr>
<th></th>
<th>Id</th>
<th>Nombre</th>
<th>Autor</th>
<th>Categoria</th>
<th>Descripcion</th>
<th>Precio</th>
<th>Imagen</th>
</tr>
</thead>
<tbody>
@if (Model.Count() == 0)
{
<tr>
<td colspan="6" style="color:red">
No Match any document
</td>
</tr>
}
else
{
foreach (Libro item in Model)
{
autor = sa.getAutor(item.IdAutor);
editorial = es.GetEditorial(item.IdEditorial);
category = cs.getCategoria(item.IdCategoria);
<tr>
<th>
<input type="hidden" name="inputText" id="inputText" value="@item.IdLibro" />
@Html.ActionLink("Mostrar", "MostrarLibro", "Usser", new { idLibro = item.IdLibro }, new { @class = "btn btn-info" })
<button onclick="pushData();" class="btn btn-info">Agregar a carrito</button>
</th>
<th>@item.IdLibro</th>
<th class="col-md-2">@item.Nombre</th>
<th>@autor.Nombre</th>
<th>@category.Nombre</th>
<th class="col-md-3"><textarea rows="4" cols="40" readonly>@item.Descripcion</textarea></th>
<th>@item.Precio</th>
<th><img src="/@item.UrlImagen.Replace("\\", "/")" width="80" height="100" /></th>
</tr>
}
}
</tbody>
</table>
</div>
And here is the JavaScript function:
<script type="text/javascript">
//create the array
var myArr = [];
function pushData() {
//get the value of the hidden input "inputText"
var inputText = document.getElementById('inputText').value;
//add the element to the array
myArr.push(inputText);
var pval = "";
for (i = 0; i < myArr.length; i++) {
pval = pval + myArr[i] + "<br/>";
}
//display the array
document.getElementById('pText').innerHTML = pval;
}
</script>
A:
You are creating the hidden input inside a loop, so what is happening is that several hidden inputs are being created, all with the same id (inputText). When you do
var inputText = document.getElementById('inputText').value;
it returns the first element it finds with that id, regardless of which button you clicked.
You could modify your pushData function so that it receives the idLibro directly, something like this:
<button onclick="pushData('@item.IdLibro');" class="btn btn-info">Agregar a carrito</button>
and the script like this:
<script type="text/javascript">
//create the array
var myArr = [];
function pushData(idLibro) {
myArr.push(idLibro);
var pval = "";
for (i = 0; i < myArr.length; i++) {
pval = pval + myArr[i] + "<br/>";
}
//display the array
document.getElementById('pText').innerHTML = pval;
}
</script>
| {
"pile_set_name": "StackExchange"
} |
Q:
How to change the owner of a Stored Procedure in DB2?
Let's say that I have a procedure called schema.proc1.
How do I change the owner of it? I'm using db2 express-c 10.5
I've tried the following command running as the olduser,
transfer ownership of procedure schema.proc1 to user newuser preserve privileges;
I want to do this because I believe this is the constraint that is preventing me from being able to create or replace schema.proc1 as newuser.
I would like to point out what I've done regarding privileges and the newUser (all of this was done as the oldUser).
TRANSFER OWNERSHIP OF SCHEMA mySchema TO USER newUser PRESERVE PRIVILEGES;
GRANT ALTERIN ON SCHEMA mySchema TO USER newUser WITH GRANT OPTION;
GRANT CREATEIN ON SCHEMA mySchema TO USER newUser WITH GRANT OPTION;
GRANT DROPIN ON SCHEMA mySchema TO USER newUser WITH GRANT OPTION;
GRANT BINDADD ON DATABASE TO USER newUser with grant option;
GRANT CONNECT ON DATABASE TO USER newUser with grant option;
GRANT CREATETAB ON DATABASE TO USER newUser with grant option;
GRANT CREATE_EXTERNAL_ROUTINE ON DATABASE TO USER newUser with grant option;
GRANT CREATE_NOT_FENCED_ROUTINE ON DATABASE TO USER newUser with grant option;
GRANT IMPLICIT_SCHEMA ON DATABASE TO USER newUser with grant option;
GRANT LOAD ON DATABASE TO USER newUser with grant option;
GRANT QUIESCE_CONNECT ON DATABASE TO USER newUser with grant option;
GRANT EXPLAIN ON DATABASE TO USER newUser with grant option;
GRANT SQLADM ON DATABASE TO USER newUser with grant option;
GRANT WLMADM ON DATABASE TO USER newUser with grant option;
grant createin on schema mySchema to newUser with grant option;
grant alterin on schema mySchema to newUser with grant option;
grant dropin on schema mySchema to newUser with grant option;
Even with all of this, still no luck replacing the stored procedure.
Thanks in advance!
A:
I believe you are correct on the syntax. A few points to note.
You cannot grant to yourself or revoke from yourself, so someone else may need to grant you the privileges you are after.
You may need the DB level privileges CREATE_EXTERNAL_ROUTINE and/or CREATE_NOT_FENCED_ROUTINE granted to you. You may also need BINDADD granted to you as well. These privileges allow you to create stored procedures.
You do need to be the owner of the stored procedure to do a REPLACE. Otherwise, as long as you have CREATEIN and DROPIN on the schema, you might be able to DROP and then CREATE the stored procedure.
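For the drop-and-recreate route, an illustrative sketch (assuming an SQL PL procedure and @ as the statement terminator; the parameter list and body are placeholders, not the real proc1):
DROP PROCEDURE mySchema.proc1@
CREATE PROCEDURE mySchema.proc1 (IN p_id INTEGER)
  LANGUAGE SQL
BEGIN
  -- procedure body goes here
END@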
| {
"pile_set_name": "StackExchange"
} |
Q:
How to call MATLAB code from C?
I have some code that plots triangles in MATLAB.
I need to be able to somehow execute this code from my C program which generates these points.
Is that possible? How can it be done?
Just a thought:
Can I somehow embed MATLAB code in C, so that it can compile on a C compiler?
A:
The Mathworks site has full details; a demo video of calling the Matlab engine from C, and also the Matlab to C Compiler.
A:
As mentioned previously by answerers, you can call a live copy of MATLAB from C via the MATLAB Engine interface.
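A minimal sketch of that Engine interface (engine.h ships with MATLAB and you link against the eng/mx libraries; the plotting command is only an illustration of handing C-generated points to MATLAB):
#include <stdio.h>
#include <string.h>
#include "engine.h"

int main(void) {
    Engine *ep = engOpen(NULL);               /* start or connect to a MATLAB session */
    if (ep == NULL) {
        fprintf(stderr, "Can't start MATLAB engine\n");
        return 1;
    }
    /* copy triangle vertices computed in C into the MATLAB workspace */
    double xy[6] = {0.0, 1.0, 0.5, 0.0, 0.0, 1.0};   /* column-major: x1 x2 x3, then y1 y2 y3 */
    mxArray *pts = mxCreateDoubleMatrix(3, 2, mxREAL);
    memcpy(mxGetPr(pts), xy, sizeof(xy));
    engPutVariable(ep, "pts", pts);
    /* run the existing MATLAB plotting code */
    engEvalString(ep, "fill(pts(:,1), pts(:,2), 'r'); axis equal; pause(5);");
    mxDestroyArray(pts);
    engClose(ep);
    return 0;
}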
If the end-product needs to be used where there is no live copy of MATLAB, you can deploy the application using MATLAB Compiler. However, MATLAB Compiler does not, as another answer has suggested, convert MATLAB programs into C code (and hasn't done for a few versions now). Instead, it archives and encrypts your MATLAB program, and packages it into an executable or shared library that executes against the MATLAB Compiler Runtime (shareable royalty-free). The executable or shared library can then be called from C.
Alternatively you could go the other way around, and call your C code from MATLAB, using either loadlibrary or MATLAB's MEX interface.
Update: As of release R2011a, you can also use MATLAB Coder to generate C code directly from a subset of the MATLAB language.
| {
"pile_set_name": "StackExchange"
} |
Q:
Recursive Balancing Characters to the nth Degree, Python Recursion
I'm working on a problem where, given a string, any given number of 'a's must be followed by twice the number of 'b's. This must be recursive. For example:
abb("aabbbbabb")
>>>True
abb("aabbabbb")
>>>False
My intuition was to make a stack and push the letter 'a' to the stack and every time two 'b's are seen, pop 'a' from the stack. Instead, I decided to update the position and the amount of 'b's as variables. Assuming the first letter is always 'a', the amount of 'b's that should appear is represented by i + 2.
So far I have this tidbit:
def aabb(string, postion=None, i=None):
if position is None:
position = 0
if i is None:
i = 0
if position != len(string):
if string[position] == "a":
i = i +2
position = position +1
return aabb(string[position], position, i)
if string[position] =="b" and i >0:
i = i - 1
position = position +1
return aabb(string[position], position, i)
if string[position] == "a" and i != 0:
print "False"
if string[position] == "b" and i == 0:
print "False"
else:
print "True"
The problem I am running into is the following:
if position is None:
UnboundLocalError: local variable 'position' referenced before assignment
I am not sure how I am referencing it if it's an optional parameter? Otherwise, am I just going about this in an entirely wrong way? Thanks.
A:
Here's a pseudocode version:
def abb(s):
if s is empty:
return True # base case
else:
get number of leading "a"s # itertools.takewhile
if number is 0:
return False
else:
get twice that number of characters # itertools.islice
if they are all bs
return abb(remainder of string)
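For what it's worth, here is one way the pseudocode could be fleshed out in plain Python (assuming the rule is that every maximal run of n 'a's must be followed immediately by exactly 2*n 'b's):
from itertools import takewhile

def abb(s):
    if not s:                                   # base case: empty string is balanced
        return True
    n = len(list(takewhile(lambda c: c == 'a', s)))
    if n == 0:                                  # must start with at least one 'a'
        return False
    rest = s[n:]
    if rest[:2 * n] != 'b' * (2 * n):           # need exactly twice as many 'b's next
        return False
    return abb(rest[2 * n:])

print(abb("aabbbbabb"))   # True
print(abb("aabbabbb"))    # False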
| {
"pile_set_name": "StackExchange"
} |
Q:
Submitting ios app with hackintosh
I developed my app on a hackintosh and also successfully uploaded a build for testing. I am wondering if I can just submit it for review?
Has anybody ever heard of an app being rejected because it was submitted from a hackintosh?
A:
In 2011 I used to develop apps on a virtual machine with OS X running under Windows XP. I successfully submitted several app versions.
| {
"pile_set_name": "StackExchange"
} |
Q:
pointer to function and ODR
There are so many questions on ODR but I cannot find what I'm looking for, so apologies if this is a duplicate or if the title is inappropriate.
Consider the following:
struct t {t(*id)();};
template<typename T>
t type() {return {type<T>};}
This is an over-simplification of my attempt to define a unique identifier per type, that hopefully remains unique across different compilation units.
In particular, given a concrete type T like std::string, and assuming two distinct compilation units include the above code in a header file, I would like the expression
type<T>().id
to take the same value (of type t(*)()) in both units, hence serve as a unique identifier for type T.
The value is the address of function type<T>, so the question is whether a unique function type<T> in the program is guaranteed by the one-definition rule. iso 3.2/3 says
Every program shall contain exactly one definition of every non-inline function or variable that is odr-used in that program.
where by 3.2/2
A non-overloaded function whose name appears as a potentially-evaluated expression or [...], is odr-used, unless [...]
and I assume a function is non-inline if its address is taken (though I can't find that in the standard).
iso 3.2/5 lists a number of exceptions, but the only references to functions are
inline function with external linkage, [...], non-static function template, [...], member function of a class template, or template specialization for which some template parameters are not specified [...]
and none appears to be the case here.
A verifiable example would take more than one file. In fact, an example claimed to fail is given by Dieter Lücking, though it does not fail in my case (which I do not take as any form of "guarantee").
So, is this going to work or not?
A:
So 3.2/5 actually seems like pretty strong support. First note that a definition is a source code construct, not an object code construct, though clearly there is a very close relationship. 3.2/5 is saying that it's okay to have multiple definitions of non-static function templates, and that furthermore in such a case it must behave as if there were only a single definition. If a function had different addresses in different translation units, then that is not behaving as if there were only one definition, at least in my reading.
This is especially true since a function pointer can be passed as a non-type template argument. Such arguments must be constant and must be the same for all translation units. For example, a string literal cannot be such an argument precisely because its address varies across translation units.
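As a small illustration of that point, using the template from the question (C++11 syntax), the address can even appear where a translation-unit-independent constant is required:
template<t (*F)()>
struct tag {};                     // takes a function pointer as a non-type template argument

using int_tag = tag<&type<int>>;   // every TU naming this must refer to the same function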
Whether or not all the requirements are met will depend exactly on the context of the multiple definitions, since they deal with things such as name resolution, etc. However, they are all "run-of-the-mill" requirements that are of the "of-course" type. For example, a violation of it would be something like:
file1.cpp
static int i;
// This is your template.
template <typename T>
void foo() {
i; // Matches the above i.
}
file2.cpp
static int i;
// This is your template. You are normally allowed to have multiple
// identical definitions of it.
template <typename T>
void foo() {
// Oops, matches a different entity. You didn't satisfy the requirements.
// All bets are off.
i;
}
I know that multiple definitions are supported in Linux via weak symbols. In fact, on Linux the Lucking example fails to fail precisely because of this. I left a comment to his answer asking for platform. At link time, the linker will throw away all instances of a weak symbol except one. Obviously, if the instances aren't actually the same, that would be bad. But those requirements in 3.2/5 are designed to ensure that the instances are in fact all the same and thus the linker can keep only one.
ADDENDUM: Dieter Lucking now says that he had a compilation problem, and it in fact does not fail for him. It would be good if someone familiar with the internals of Windows DLLs could comment here, though, as to how Visual Studio handles this.
| {
"pile_set_name": "StackExchange"
} |
Q:
Grant access to Event Viewer "Application and Services Logs" via GPO
My monitoring team has requested to be able to read the logs under "Application and Services" in 2008/2012/2016 event viewer. These are the logs that reside in "%SystemRoot%\System32\Winevt\Logs\". Specifically, they're interested in the "Operations Manager" log, which deals with the MS SCOM client's health and activities.
I've tried:
Adding them to the "Event Log Readers" group on each server via GPO. This lets them get to the Application event log and System event log, but not the other logs.
Granting them read access to the "%SystemRoot%\System32\Winevt\Logs\Operations Manager.evtx" file
Granting them read access to the "%SystemRoot%\System32\Winevt\Logs\" folder.
None of these have helped, they get an access denied.
The ideal solution would be deployable by GPO, not require admin rights, and allow them to connect to a server remotely via Event Viewer without going through Remote Desktop, command line, or powershell.
I'm stuck. Any help is appreciated!
A:
Granting permission to the files is not going to provide access.
If you find that Event Log Readers does not have access to any of the logs under Applications and Services Logs, you can create a list of the log names and use wevtutil to grant your custom permission:
REM %%i in a cmd script, or %i if running interactively
FOR /F %%i in (Lognames.txt) DO (
REM Event Log Readers (S-1-5-32-573) security principal
wevtutil sl %%i /ca:O:BAG:SYD:(A;;0xf0007;;;SY)(A;;0x7;;;BA)(A;;0x1;;;BO)(A;;0x1;;;SO)(A;;0x1;;;S-1-5-32-573)
)
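If you still need to build Lognames.txt in the first place, wevtutil can enumerate every log name registered on the machine; you would then trim the output down to the logs you actually care about (such as the Operations Manager log):
wevtutil el > Lognames.txt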
You may want to confirm which Event Log Readers the accounts have been added to. For member servers, they need to be added to the local Event Log Readers group. For domain controllers, the domain builtin Event Log Readers group.
| {
"pile_set_name": "StackExchange"
} |
Q:
node.js / ruby integration with beanstalkd
This is related to another question specific to payment processing, and that is my example use case, but I was considering trying to integrate node.js and ruby on the same server using beanstalkd. Basically, I want to use node.js as my main web server, but when I need to do some payment processing, I'd like to use something robust and stable like ruby.
I was considering trying to use beanstalkd as a way to have node.js queue up payment processing jobs for ruby to perform in the background. The documentation for beanstalkd is a little slim, so I'm having trouble figuring out if this is a good approach, or exactly how I would go about it. From what I can tell though, it should be fairly straightforward to start a beanstalkd process and then have node.js connect to it to send it jobs, and have a ruby script which can perform the jobs and send back the results.
A:
Beanstalk is appropriate for this task. Make sure you use the binlog option to make the jobs persistent between beanstalkd restarts.
Your node.js processes will use a tube (called, say 'payments') and put jobs into it, with an appropriate priority.
Your Ruby script can then watch the payments tube and process the jobs.
Make sure you give the jobs an adequate TTL - you want to ensure the payment processing has time to complete before beanstalk assumes the job has failed and re-queues it.
Just curious - how will you provide feedback to the customer that the payment has succeeded? Perhaps the Ruby script will update a record in the database?
A:
So after rooting around enough, I did find the documentation I really needed to evaluate beanstalkd. There is a protocol document in the source not linked to from anything I've been reading or the main page (although it is in a folder called doc) that gives better details about its capabilities and limitations.
It seems very nice as an asynchronous worker queue, which is perfect for node.js, and it would be a good fit to communicate with some Ruby code to do payment processing, but as dkam says, how do I get the response back to node.js to be able to update the client. While I think this makes sense for many tasks, it is not sufficient for mine.
Given alfred's advice, I've investigated redis, and while it isn't exactly what I need right out of the box, I think it will be sufficient. There is already an actor library out there built on top of redis for Ruby, so I think I should be able to make something simple that can talk between node and Ruby with roughly actor style semantics, or at least callback semantics.
| {
"pile_set_name": "StackExchange"
} |
Q:
ImageButton in gwt
I have to create an image button in GWT which uses three images (left side image, center stretch image and right image). The left and right images have rounded corners. The center image needs to be stretched depending on the button title size. The created ImageButton should have all the functionality of Button.
Can anyone help me with how to achieve this?
A:
If you need a button with rounded corners then there are a number of options:
Create a new widget that extends the DecoratorPanel to create the rounded corners. The DecoratorPanel will result in a table (HTML). You'll probably want to replace the standard images. Look at the standard.css that GWT provides to find the styles that define those images, then override those styles in your custom stylesheet (look for the CSS class ".gwt-DecoratorPanel"). In the widget, add a Label widget to display the button text and provide get and set methods on your widget to get and set text to the internal label. The label will resize automatically forcing the table cell to grow bigger.
Create a new widget that extends Composite. The widget should wrap a FlexTable. Use 3 cells on the same row. Add a Label to the center cell and provide get and set methods on your widget to get and set text to the internal label. The label will resize automatically forcing the table cell to grow bigger. Add the appropriate handlers to the FlexTable widget. I suggest you use those events to add or remove styles to the appropriate cells and define the background images in a stylesheet.
You could create your own widget. This requires that you generate your own HTML etc. which may not immediately work in every browser. I recommend trying option 1 or 2 first.
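A minimal sketch of option 1, with the get/set text methods described in option 2 bolted on (the style name is just an assumption to hang your own image overrides from):
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.event.shared.HandlerRegistration;
import com.google.gwt.user.client.ui.Composite;
import com.google.gwt.user.client.ui.DecoratorPanel;
import com.google.gwt.user.client.ui.HasText;
import com.google.gwt.user.client.ui.Label;

public class RoundedButton extends Composite implements HasText {
    private final Label label = new Label();

    public RoundedButton(String text) {
        label.setText(text);
        DecoratorPanel panel = new DecoratorPanel();   // supplies the rounded-corner images
        panel.setWidget(label);
        initWidget(panel);
        setStyleName("my-RoundedButton");              // assumed class for your replacement images
    }

    public HandlerRegistration addClickHandler(ClickHandler handler) {
        return label.addClickHandler(handler);         // Label implements HasClickHandlers
    }

    public String getText() { return label.getText(); }
    public void setText(String text) { label.setText(text); }
}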
| {
"pile_set_name": "StackExchange"
} |
Q:
Writing List of Directories with Subdirectories
Now, I know there are already a lot of questions on Stackoverflow about folder recursion and getting a folder including it's sub-directories etc. pp., but I haven't found anything related to what I'm encountering here.
My problem is as follows:
I've taken the code snippet about Folder Recursion from here (page bottom) and adapted it to my needs; that is, having it not write all (sub)directories to the console but having it add them to a list instead. Here is my code (note the part that's commented out):
private static List<String> ShowAllFoldersUnder(string path)
{
var folderList = new List<String>();
try
{
if ((File.GetAttributes(path) & FileAttributes.ReparsePoint)
!= FileAttributes.ReparsePoint)
{
foreach (string folder in Directory.GetDirectories(path))
{
folderList.Add(folder);
// Console.WriteLine(folder);
ShowAllFoldersUnder(folder);
}
}
}
catch (UnauthorizedAccessException) { }
return folderList;
}
This is how I call it (Dir is a string containing the path):
var _folders = ShowAllFoldersUnder(Dir);
foreach (string folder in _folders)
{
Console.WriteLine(folder);
}
The Problem is only the first level of folders is added to the list, meaning my output is for example:
[...]
C:\Users\Test\Pictures
C:\Users\Test\Recent
C:\Users\Test\Saved Games
C:\Users\Test\Searches
C:\Users\Test\SendTo
[...]
If I however uncomment Console.WriteLine(folder); from the method, it echoes all (sub)directories to the console:
[...]
C:\Users\Test\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned
C:\Users\Test\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\ImplicitAppShortcuts
C:\Users\Test\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch\User Pinned\TaskBar
C:\Users\Test\AppData\Roaming\Microsoft\Internet Explorer\UserData
C:\Users\Test\AppData\Roaming\Microsoft\Internet Explorer\UserData\Low
C:\Users\Test\AppData\Roaming\Microsoft\MMC
C:\Users\Test\AppData\Roaming\Microsoft\Network
[...]
I'm desperate after having spent hours researching what could be my mistake. Does anybody have a clue what's causing my problem?
A:
It would appear that you are doing nothing with the folders found in the recursive calls to ShowAllFoldersUnder.
This modification should resolve it. Change:
ShowAllFoldersUnder(folder);
to:
folderList.AddRange(ShowAllFoldersUnder(folder));
In production code I would likely refactor it to use a single List throughout the recursion so as to avoid any overhead of creating and merging multiple lists.
A:
modify your method to this
private static void ShowAllFoldersUnder(string path, List<string> folderList)
{
try
{
if ((File.GetAttributes(path) & FileAttributes.ReparsePoint)
!= FileAttributes.ReparsePoint)
{
foreach (string folder in Directory.GetDirectories(path))
{
folderList.Add(folder);
// Console.WriteLine(folder);
ShowAllFoldersUnder(folder, folderList);
}
}
}
catch (UnauthorizedAccessException) { }
}
now call it like this
var _folders = new List<string>();
ShowAllFoldersUnder(Dir, _folders);
This way you avoid the extra list creation and memory consumption of the other answers. With this approach you supply an initial list to the method and it adds all the entries to it, whereas the other answers generate a new list on each call and then copy the result into the parent list, which causes a lot of memory allocation, copying and deallocation.
| {
"pile_set_name": "StackExchange"
} |
Q:
Replace missing value without knowing exact position in AWK
I am trying to process a GTF/GFF file which I download from ensemble. The truncated version of the file looks like this:
1 ensembl gene 5273 10061 . - . gene_id ENSGALG00000054818; gene_version 1; gene_source ensembl; gene_biotype protein_coding;
1 ensembl transcript 5273 10061 . - . gene_id ENSGALG00000054818; gene_version 1; transcript_id ENSGALT00000098984; transcript_version 1; gene_source ensembl; gene_biotype protein_coding; transcript_source ensembl; transcript_biotype protein_coding;
1 ensembl gene 58427 58617 . + . gene_id ENSGALG00000047594; gene_version 1; gene_name RF00004; gene_source ensembl; gene_biotype snRNA;
1 ensembl transcript 58427 58617 . + . gene_id ENSGALG00000047594; gene_version 1; transcript_id ENSGALT00000094382; transcript_version 1; gene_name RF00004; gene_source ensembl; gene_biotype snRNA; transcript_name RF00004-201; transcript_source ensembl; transcript_biotype snRNA;
1 ensembl exon 58427 58617 . + . gene_id ENSGALG00000047594; gene_version 1; transcript_id ENSGALT00000094382; transcript_version 1; exon_number 1; gene_name RF00004; gene_source ensembl; gene_biotype snRNA; transcript_name RF00004-201; transcript_source ensembl; transcript_biotype snRNA; exon_id ENSGALE00000460125; exon_version 1;
1 ensembl gene 63264 63454 . + . gene_id ENSGALG00000049206; gene_version 1; gene_name RF00004; gene_source ensembl; gene_biotype snRNA;
1 ensembl transcript 63264 63454 . + . gene_id ENSGALG00000049206; gene_version 1; transcript_id ENSGALT00000092780; transcript_version 1; gene_name RF00004; gene_source ensembl; gene_biotype snRNA; transcript_name RF00004-201; transcript_source ensembl; transcript_biotype snRNA;
1 ensembl exon 63264 63454 . + . gene_id ENSGALG00000049206; gene_version 1; transcript_id ENSGALT00000092780; transcript_version 1; exon_number 1; gene_name RF00004; gene_source ensembl; gene_biotype snRNA; transcript_name RF00004-201; transcript_source ensembl; transcript_biotype snRNA; exon_id ENSGALE00000501941; exon_version 1;
(Nine tab separated columns.)
In some rows there are attributes missing like gene_name, transcript_id or transcript_name.
If gene_name is missing I wanted to replace it with gene_id,
and if transcript_name is missing I wanted to replace it with transcript_id (in the case of missing transcript_id it gets replaced by gene_id).
However, the position of the transcript_id information is unknown. How would I look for the attribute and, in case it is missing, replace it with the value of transcript_id when its position is unknown?
I managed to replace the missing value for gene_name with the value of gene_id like this:
awk '{if (!/gene_name/) print $0, "gene_name " $10; else print $0}' input.gtf > output.gtf
This worked fine, but only because in this particular case I knew the position of the value that I used as a replacement. I could not figure out how I would achieve this when the position of the match is unknown.
I used the following code to get the information whose position is unknown, but could not integrate a check for the mismatch like in the first example above:
awk '{for (i=1; i<=NF; ++i) { if ($i ~ "transcript_name") print$0,"transcript_name ", $(i+1) } }' input.gtf > output.gtf
The condition is that transcript_name should be filled in with the value of transcript_id only if it is not already present in the row.
I really would appreciate some help with this!
A:
Using an awk script;
script.awk:
#!/usr/bin/awk -f
BEGIN {
FS=OFS="\t"
}
{
gsub(/; *$/, "", $9) # trim trailing `;'
split($9, pairs, / *; */) # split attributes into pairs
for (i in pairs) {
split(pairs[i], kv, / */) # split pair into key and value
attr[kv[1]] = kv[2] # add it to `attr'
}
# fill missing fields
if (!("gene_name" in attr))
attr["gene_name"] = attr["gene_id"]
if (!("transcript_id" in attr))
attr["transcript_id"] = attr["gene_id"]
if (!("transcript_name" in attr))
attr["transcript_name"] = attr["transcript_id"];
# recreate the attributes field
attr_all = sep = ""
for (k in attr) {
attr_all = attr_all sep k " " attr[k]
sep = "; "
}
# update the record with new attributes
$9 = attr_all
}
1 # print record
Usage example:
awk -f script.awk inputfile
Online demo.
| {
"pile_set_name": "StackExchange"
} |
Q:
Representations of Lorentz group in interacting QFT
In QFT, we obtain a representation of the Lorentz group by defining a set of unitary operators whose action on (spinless) free particle states is given by
\begin{equation}
U(\Lambda) |k \rangle = |\Lambda k \rangle
\end{equation}
(and similarly for multiparticle states).
Physically, if two observers $O_1$ and $O_2$ are related by a Lorentz transformation $\Lambda$ and a free particle appears to observer $O_1$ to be in state $|k \rangle$, then it will appear to observer $O_2$ to be in state $|\Lambda k \rangle$. This transformation property extends to arbitrary elements of the Hilbert space, even elements that are not single particle momentum eigenstates: if $O_1$ sees a state $|s\rangle$ then $O_2$ sees a state $U(\Lambda)|s \rangle$.
My question is the following: If both observers are situated in a highly interacting region (i.e. a region of space where particles do not behave as free particles), and observer $O_1$ sees a state $|s \rangle$, what does observer $O_2$ see? Do we now need a new representation of the Lorentz group to answer this question? If so, how can such a representation be obtained if I know the Hamiltonian for the full interacting theory?
My guess is that we do need a new representation. I am basing this on the fact that, if we consider the full Poincare group, then it is clear that the free particle representation does not provide us with the correct transformation between observers, since it fails to give the correct time-translation.
A:
Yes you would need a new representation. The reason is the Lorentz group is connected with the group of translations and as you point out the time-translation is different. If $K^i$ is the generator of boosts in the $i$ direction and $P^j$ the generator of space translations (i.e. the linear momentum)
$$[K^i,P^j]=iH\delta_{ij}$$
So if the interacting Hamiltonian is different, the $K^i$ operators would need to be different as well.
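For completeness, the other commutators involving the boosts in the same convention are (overall signs depend on the metric convention used)
$$[K^i,H]=iP^i \qquad [K^i,K^j]=-i\epsilon^{ijk}J^k \qquad [J^i,K^j]=i\epsilon^{ijk}K^k$$
so the boost generators are tied to the Hamiltonian both directly and through the momenta.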
However even in the interacting theory if you have a single stable particle it will transform nicely like a free particle under the interacting representation of the Poincare group. The catch is that the stable particles may be different from those of the free theory you started with. For instance if you have a single electron alone in the universe that state will transform like a particle. But also a single hydrogen atom in its ground state (in a theory in which it is stable) looks just as much like a single particle as far as representations of the Poincare group are concerned.
Weinberg's QFT textbook (Volume I) has a good discussion of both these points. Chapter 2 goes over the representations of the Poincare group, and parts of Chapter 3 talk about the problem of the interacting boost generators.
| {
"pile_set_name": "StackExchange"
} |
Q:
Efficiently grouping elements that exists in both documents (inner join) in Xquery
I have the following data:
<Subjects>
<Subject>
<Id>1</Id>
<Name>Maths</Name>
</Subject>
<Subject>
<Id>2</Id>
<Name>Science</Name>
</Subject>
<Subject>
<Id>2</Id>
<Name>Advanced Science</Name>
</Subject>
</Subjects>
and:
<Courses>
<Course>
<SubjectId>1</SubjectId>
<Name>Algebra I</Name>
</Course>
<Course>
<SubjectId>1</SubjectId>
<Name>Algebra II</Name>
</Course>
<Course>
<SubjectId>1</SubjectId>
<Name>Percentages</Name>
</Course>
<Course>
<SubjectId>2</SubjectId>
<Name>Physics</Name>
</Course>
<Course>
<SubjectId>2</SubjectId>
<Name>Biology</Name>
</Course>
</Courses>
I wish to efficiently get elements from both documents that share the same Ids.
I want to get the result like this:
<Results>
<Result>
<Table1>
<Subject>
<Id>1</Id>
<Name>Maths</Name>
</Subject>
</Table1>
<Table2>
<Course>
<SubjectId>1</SubjectId>
<Name>Algebra I</Name>
</Course>
<Course>
<SubjectId>1</SubjectId>
<Name>Algebra II</Name>
</Course>
<Course>
<SubjectId>1</SubjectId>
<Name>Percentages</Name>
</Course>
</Table2>
</Result>
<Result>
<Table1>
<Subject>
<Id>2</Id>
<Name>Science</Name>
</Subject>
<Subject>
<Id>2</Id>
<Name>Advanced Science</Name>
</Subject>
</Table1>
<Table2>
<Course>
<SubjectId>2</SubjectId>
<Name>Physics</Name>
</Course>
<Course>
<SubjectId>2</SubjectId>
<Name>Biology</Name>
</Course>
</Table2>
</Result>
</Results>
So far I have 2 solutions:
<Results>
{
for $e2 in $t2/Course
let $foriegnId := $e2/SubjectId
group by $foriegnId
let $e1 := $t1/Subject[Id = $foriegnId]
where $e1
return
<Result>
<Table1>
{$e1}
</Table1>
<Table2>
{$e2}
</Table2>
</Result>
}
</Results>
and the otherway round:
<Results>
{
for $e1 in $t1/Subject
let $id := $e1/Id
group by $id
let $e2 := $t2/Course[SubjectId = $id]
where $e2
return
<Result>
<Table1>
{$e1}
</Table1>
<Table2>
{$e2}
</Table2>
</Result>
}
</Results>
Is there a more efficient way of doing this?
Perhaps taking advantage of multiple groups?
Update
A major issue with my code at the moment is that its performance is highly dependent on which table is bigger. For example the 1st solution is better in cases where the 2nd table is bigger and vice versa.
A:
The solution you have looks reasonable to me. It will perform significantly better on a processor like Saxon-EE that does join optimization than on one (like Saxon-HE) that doesn't. If you want to hand-optimize it, your simplest approach is to switch to using XSLT: use the key() function to replace the filter expression $t1/Subject[Id = $foriegnId] which, in the absence of optimization, searches your second file once for each element selected in the first file.
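A rough fragment of that key() approach (element names follow the sample documents; the three-argument form of key() used to point it at the Courses document is XSLT 2.0):
<xsl:key name="courses-by-subject" match="Course" use="SubjectId"/>
<!-- then, instead of $t2/Course[SubjectId = $id] -->
<xsl:variable name="e2" select="key('courses-by-subject', $id, $t2)"/>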
| {
"pile_set_name": "StackExchange"
} |
Q:
JavaFX: How to use the GraphicsContext method appendSVGPath(String svgpath)
I'm working on a project that makes use of SVG's. Currently the program has the SVG's stored as SVGPath objects in an FXML file. The file is then loaded into a Group which is then added to the screen. In the FXML file, there are approximately 300 such SVGPaths. Which I believe ultimately means that there are 300 nodes on the scene graph.
I'm going to eventually have to scale up the number of SVGPaths and am having concerns about putting more nodes on the scene, so I began to look at using a Canvas/GraphicsContext instead.
GraphicsContext has a method appendSVGPath(String svgpath) that I think I could use to draw my SVGs on the canvas, but am not having any luck getting them to appear.
I'm using the CanvasTest.java file from Oracle as starting point:
http://docs.oracle.com/javafx/2/canvas/jfxpub-canvas.htm
I modified the file to include the following method:
private void appendSVG(GraphicsContext gc) {
SVGPath svg = new SVGPath();
svg.setContent("M 100 100 L 300 100 L 200 300 z");
svg.setFill(Color.RED);
svg.setStroke(Color.BLUE);
gc.appendSVGPath(svg.getContent());
}
But I can't get the shape to appear on the canvas.
Full test code here:
package canvastest;
import javafx.application.Application;
import static javafx.application.Application.launch;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.paint.Color;
import javafx.scene.shape.SVGPath;
import javafx.stage.Stage;
public class CanvasTest extends Application {
private Canvas canvas = new Canvas(200, 200);
private GraphicsContext gc = canvas.getGraphicsContext2D();
private Group root = new Group();
public static void main(String[] args) {
launch(args);
}
@Override
public void start(Stage primaryStage) {
primaryStage.setTitle("Canvas Test");
appendSVG(gc);
// SVGPath svg = new SVGPath();
// svg.setContent("M 100 100 L 300 100 L 200 300 z");
// svg.setFill(Color.RED);
// svg.setStroke(Color.BLUE);
root.getChildren().add(canvas);
primaryStage.setScene(new Scene(root, 400, 400));
primaryStage.show();
}
private void appendSVG(GraphicsContext gc) {
SVGPath svg = new SVGPath();
svg.setContent("M 100 100 L 300 100 L 200 300 z");
svg.setFill(Color.RED);
svg.setStroke(Color.BLUE);
gc.appendSVGPath(svg.getContent());
}
}
If I uncomment out the SVG section from start, and just add the svg to root, the svg will display.
Has anyone had any success using appendSVGPath?
A:
Canvas isn't like the scene graph, stroking and filling paths does not happen automatically. Instead you need to feed your path segments to the canvas, then explicitly call fill() or stroke() to have those operations applied. For more information, see the "path rendering" section at the front of the GraphicsContext javadoc.
import javafx.application.Application;
import javafx.scene.*;
import javafx.scene.canvas.*;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
public class CanvasTest extends Application {
private Canvas canvas = new Canvas(200, 200);
private GraphicsContext gc = canvas.getGraphicsContext2D();
public static void main(String[] args) {
launch(args);
}
@Override
public void start(Stage stage) {
appendSVG(gc);
stage.setScene(new Scene(new Group(canvas)));
stage.show();
}
private void appendSVG(GraphicsContext gc) {
gc.setFill(Color.RED);
gc.setStroke(Color.BLUE);
gc.appendSVGPath("M 50 50 L 150 50 L 100 150 z");
gc.fill();
gc.stroke();
}
}
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I install the Nvidia driver for a GeForce GT 630
I recently installed 14.04.
But now I need a new driver for my nVidia GeForce GT 630. The former driver was rejected as not compatible with the 64-bit. I found that other driver and when I wanted to install it in the terminal with sh I was called first to stop the x-server. It cannot be installed with running x-server.
So how do I install them?
A:
You can download the driver for your video card for Ubuntu 64bit from here, assuming that you are using Ubuntu 64bit now. If you installed Ubuntu 32bit, there is a 331 version of the same driver for Ubuntu 32bit. Save your driver somewhere where you can easily access it, like your user home directory or inside a newly created nvidia directory in your user home directory.
To be able to install your nvidia driver you have to remove your previous video driver with this code in a terminal window:
sudo apt-get remove nvidia* && sudo apt-get autoremove
After you finish with this one, you should also blacklist the nouveau driver by editing this file with either:
gksudo gedit /etc/modprobe.d/blacklist-nouveau.conf
or
sudo nano /etc/modprobe.d/blacklist-nouveau.conf
…and add these lines at the end:
blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off
If, by any chance, there is no blacklist-nouveau.conf present in /etc/modprobe.d/, you can save your file as blacklist-nouveau.conf when prompted.
And you can also disable the Kernel Nouveau by typing these lines in a terminal window:
echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf
and after that
sudo update-initramfs -u
Now you can reboot your computer, and when you get to the login prompt, press Ctrl+Alt+F1 to exit to the terminal console. Login with your username and password.
Go to the directory where you saved your nvidia driver using the command cd in the terminal console. Eg. cd nvidia considering that you are already in your user home directory after you login. You can use command dir to be able to see your exact driver's name.
To stop your display manager or the X server, you can type in the console this code:
sudo stop lightdm or
sudo lightdm stop
If you are not using lightdm as your default display manager (DM), replace lightdm with your default display manager, which can be either kdm or gdm or whatever your display manager is.
You should get a message in the terminal console saying --> lightdm stopped/waiting
And now you can finally install the nvidia driver using a code similar to this one:
sudo sh NVIDIA-Linux-x86_64.....run (for Ubuntu 64bit)
or
sudo sh NVIDIA-Linux-x86.....run (for Ubuntu 32bit)
If you don't type the exact name of the driver, you'll get this message: NVIDIA-Linux... could not be found and you should type again the code for installing the driver.
Nvidia installer automatically installs the driver, and at the end it will ask you whether you want to save your new X configuration. Press Yes. After reboot and getting to your desktop and changing your NVIDIA settings as you please you should open a terminal window and type in this code:
sudo nvidia-xconfig
to save your new nvidia configuration in /etc/X11/xorg.conf.
Note
You might need to install some extra software packages if nvidia installer gives an error and prompts for missing dependencies:
sudo apt-get install dkms fakeroot build-essential linux-headers-generic
But you need to install all these missing packages only if nvidia-installer can't do the job by itself.
It can happen that after reboot your system shows a black screen or enters the low graphics mode. To fix this you should exit again to the console terminal, login with your username and password, and use the code provided above sudo nvidia-xconfig and also make use of the following tutorial. It is meant to fix the greeter assuming that they haven't fixed this bug in Ubuntu 14.04.
A:
Since most of these answers are outdated... Here is modern way to install the nvidia drivers for Ubuntu (for 14.04 and newer):
Add the graphics-drivers ppa
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
Install the recommended driver
sudo ubuntu-drivers autoinstall
Restart your system
sudo reboot
To select a different driver, or if the above doesn't work:
Add the graphics-drivers ppa
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
Purge any existing nvidia related packages you have installed
sudo apt-get purge nvidia*
Check which drivers are available for your system
ubuntu-drivers devices
Install the recommended driver
sudo apt-get install nvidia-361
Restart your system
sudo reboot
A:
You can install the Ubuntu drivers; for the GT 630 you can use: sudo apt-get install nvidia-304 OR sudo apt-get install nvidia-304-updates, not both.
| {
"pile_set_name": "StackExchange"
} |
Q:
Database doesn't receive file extension
I'm creating a profile picture upload system in PHP & MySQL. Everything works fine, except for the fact that the database does not include the extension of $db_file_name. However, when I echo the variable, it does show the extension in the string.
Any idea's what the problem might be?
This is the query:
$db_file_name = "123456.jpg";
// Move result into database
$sql = "UPDATE users SET avatar='$db_file_name' WHERE email='$s_email' LIMIT 1";
$query = mysqli_query($this->db, $sql);
The database received the number without the extension (.jpg, .gif, .png). The datatype of the avatar column is VARCHAR(255) with a default value of NULL.
A:
Alright, I found the problem.
The collation of the column was latin1_swedish_ci.
Once I set it to utf8_general_ci, the problem was solved.
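For anyone else hitting this, the collation can be changed with something like the following (assuming MySQL and the column described above):
ALTER TABLE users MODIFY avatar VARCHAR(255) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL;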
| {
"pile_set_name": "StackExchange"
} |
Q:
Does "être chocolat" exist?
I think I have read following expression somewhere:
J'étais chocolat.
Is this an existing expression (so I remember correctly)? What does it mean?
A:
Yes, such an expression exists in French but it is not mainstream and won't be understood by everyone.
It means to be duped, to fail something, see for example this page and wikipedia.
Beware that the same expression might be used with different meanings, for example to tell what hot beverage you like :
– Tu veux un café ?
– Non merci, moi je suis chocolat.
It might also be used to say that you are wearing something chocolate-colored, or that your hair or your skin is that color.
| {
"pile_set_name": "StackExchange"
} |
Q:
less css variable defined by function multiple executions
I have a css variable being defined by a randomizer function, I need this variable to generate a random background color from the list, every time I enter the page:
@all-background-colors: "#7B8AA8,#00ADD5,#E93C43,#FC8383";
@background-color: color(~`@{all-background-colors}.split(',')[Math.floor(Math.random()*@{all-background-colors}.split(',').length)]`);
However it seems that the function gets executed every time I use this variable in my css - resulting in many different colors being used across the web page.
is there any way to escape this and transform the variable to a string one after it's defined by the function?
A:
Wrapping the generating code in a mixin, and then calling that mixin once seems to have resolved the issue. So this:
@all-background-colors: "#7B8AA8,#00ADD5,#E93C43,#FC8383";
.generateColor() { /* define mixin */
@background-color: color(~`@{all-background-colors}.split(',')[Math.floor(Math.random()*@{all-background-colors}.split(',').length)]`);
}
.generateColor(); /* call the mixin which sets the variable once */
.test1 {
color-fixed: @background-color;
}
.test2 {
color-fixed: @background-color;
}
.test3 {
color-fixed: @background-color;
}
Which for me produced this consistent test css:
.test1 {
color-fixed: #7b8aa8;
}
.test2 {
color-fixed: #7b8aa8;
}
.test3 {
color-fixed: #7b8aa8;
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Where does the belief that the Earth is relatively young (6000 years) come from?
Some Christians believe the Earth is only 6000 to 12,000 years old. Where does this age originate from?
A:
No, this is not a universal belief. Some do not take the days in Genesis as literal, and some treat Paradise (as described in Genesis) as being a spiritual world rather than being the same 'world' which we measure with carbon dating and other physical measures.
The 6000 years is roughly discerned using clues from stories in the Bible, but there are other measures placing its age closer to 7500 years.
What is generally used is the following:
Length of 'day' in Genesis 1
Length of time figures in Genesis lived
Ages are added by using 'and in so-and-so's xth year, so-and-so was born' (and variations thereof)
History / chronicle books have king-dynasty lengths.
It's a lot of work to do, really.
Variations in measures can be explained by differing interpretations of 'day' in Genesis, whether 'in the xth year of y' means that x years preceeded it or x-1 years, and estimates of times not explicitly mentioned.
A:
The "young earth" figure comes from treating the descriptions of the creation of the earth in Genesis as a literal, continuous, description of earth history. It's fairly simple to do the calculation, adding up the ages of each person described, and concluding how long ago Adam and Eve happened. It was most famously done by Archbishop James Ussher.
Most Christians do not believe that the earth is that age, but a substantial minority (mainly in the US) do. See Wikipedia.
The discussion of the evidences for and against is far too complex to give even a summary here.
A:
I'm a young earth creationist.
No, not all Christians maintain this belief. Christians can identify themselves with two different categories, Young Earth Creationist (YEC) or Old Earth Creationist (OEC).
As a YEC I believe that God is more than capable of creating everything in six days and that the earth is less than 10,000 years old. As a YEC my argument originates from the following beliefs.
The flood event
The Bible talks about a huge flood about ~4,500 years ago. If you look for the evidence of this flood you'll see that there's actually a lot of evidence. Walt Brown has done some great research on this topic. He holds true to scripture and does not deny any verses.
The reason that I mention this flood is because the flood did some major devastation to this planet when it occurred. In the young earth creationist POV we see that the sediment layers, massive fossil deposits and more were created because of this flood. The continental shelves and the oceans were also created because of the flood event.
Chemical dating
The only evidence that one has of an earth that's millions of years old is chemical dating. One thing that has recently been learned about chemical dating, is that if things are left in water then that thing's chemical dates will show the thing to be much older then it actually is.
Well, if all fossilized animals that we're finding today were put there due to a large flood, then it would make sense that the chemical dating shows those fossils to be older then what they actually are.
Population
More evidence of young earth is given by our current population numbers. If we're the product of millions of years of evolution then we would see an obvious over population issue. But we don't see that, have you ever driven through Wyoming? We're not overpopulated. The entire human race could fit into the state of Virginia with room.
If you do a backwards calculation of our current population you find a small group of people ~4k to 6k years ago.
Planetary decay
I use the word decay very vagly here. The moon is drifting away from our planet. One million years ago and the moon would have been touching the earth. The sun is shrinking because of the constant amount of gases being burned. One million years ago and the sun would have been so hot that no life could have existed on earth. The planet is losing is magnetic field. People chalk this up to some kind of polar shift, which has never been seen or reproduced on any magnetic, ever.
We have NEVER witnessed a single star forming, know of one that has recently formed or even know how it would be possible. Scientists calculate that it would be impossible for debris to crush together to form a planet. They have no idea how this could happen. Granite is the same way, there's no way to create granite, it's impossible to have formed from molten lava.
conclusion
IMHO, those of weak faith believe in man and those with stronger faith believe God. To answer your question, THIS IS WHY some Christians believe in young earth.
When voting on this comment do not downvote because you disagree, as this is not an arguement as to whether or not I'm right. This is merely an answer to the question "Do I maintain a belief that the earth is young, and why."
I'M NOT DEBATING WHETHER OR NOT THESE ARE FACTS.
Again.. I'm not stating these as facts, I'm stating them as my beliefs as a YEC.
| {
"pile_set_name": "StackExchange"
} |
Q:
PHP Page timeout when aborting readfile
I have an simple php view which looks like this:
header('Content-Type: image/png');
readfile($this->image);
exit();
In this example $this->image is for example /data/pictures/thumbs/x/xyz.png.
On an Index HTML I load about 20 products and display their product image with the code above:
<img src="views/showimage.php?id=100"/>
When I now load the index page with the products, it tooks a short time to load every thumbnail image of the product. If I wait until loading is completed, I can click on each other button on the page and it loads the clicked content.
If I dont wait until the page /images are beeing loaded, I can click on each other button and the browser will run into a timeout. After this I have to clean my cookies and can do it again.
EDIT:
I figured out, that its possible to load the page for 2 times, even though I dont let him complete loading. On the third load, and clicking on navigation before page loads completed, it rans into timeout...
This is not an browser problem!
Whos having any idea?
PHP Log is empty.
A:
After discussion we've found the solution.
Root of problem was in excessive data in images. Each image had dimensions of around 40x27 pixels and size more than 1 Mb.
Adding stripImage() into part that makes thumbnails stripped away excessive data (which seems to be color profile) and decreased file size to several kb.
| {
"pile_set_name": "StackExchange"
} |
Q:
How I should make a bash script to run a C++ program?
I have a C++ program and its command to run in linux terminal is:
./executable file input.txt parameter output.txt
I want to make a bash script for it, but I cannot. I tried this one:
#!/bin/bash
file_name=$(echo $1|sed 's/\(.*\)\.cpp/\1/')
g++ -o $file_name.out $1
if [[ $? -eq 0 ]]; then
./$file_name.out
fi
but it is not right, because it does not pass the input file or the numerical parameter. Thanks in advance.
A:
This script assumes the first argument is the source file name and that it's a .cpp file. Error handling omitted for brevity.
#!/bin/bash
#set -x
CC=g++
CFLAGS=-O
input_file=$1
shift # pull off first arg
args="$*"
filename=${input_file%%.cpp}
$CC -o $filename.out $CFLAGS $input_file
rc=$?
if [[ $rc -eq 0 ]]; then
./$filename.out $args
exit $?
fi
exit $rc
So, for example running the script "doit" with the arguments "myprogram.cpp input.txt parameter output.txt" we see:
% bash -x ./doit myprogram.cpp input.txt parameter output.txt
+ set -x
+ CC=g++
+ CFLAGS=-O
+ input_file=myprogram.cpp
+ shift
+ args='input.txt parameter output.txt'
+ filename=myprogram
+ g++ -o myprogram.out -O myprogram.cpp
+ rc=0
+ [[ 0 -eq 0 ]]
+ ./myprogram.out input.txt parameter output.txt
+ exit 0
| {
"pile_set_name": "StackExchange"
} |
Q:
Teamwork in Unity
I have a Unity project without any version control, and I need to share it with another developer so that both of us can work on the project.
What strategies should be use that play nice with Unity Assets?
A:
Unity has a built in facility for supporting version control properly.
Just go into the File->Project Settings->Editor and enable external version control.
A:
I recommend using Git, it's free and the best around.
A while ago I wrote about version control (using Git) on my blog
Long story short:
Enable external version control File->Project Settings->Editor and create the .gitignore file in order to avoid unnecessary stuff on the repo (this is not really necessary, but it will be priceless during development).
Here's how the file should look like:
[Oo]bj/
[Tt]emp/
[Ll]ibrary
#These are files in the root folder of the project
*.tmproj
*.csproj
*.unityproj
*.sln
*.suo
*.user
*.pidb
*.userprefs
A:
Unity 3.0 is configured to play nicely with Subversion (at least nicer than before). I don't know if this is only in the pro version or not, I'll have to check.
In general though, the most recommended version control system is the Unity Asset Server.
| {
"pile_set_name": "StackExchange"
} |
Q:
JQUERY - Trim Function Not Working
The trim function does not work correctly
<input class="input"></input>
<div class="button">CLICK</div>
$(".button").click(function() {
var name = $( ".input" ).val();
name = $.trim(name);
console.log("TRIM " + name);
});
http://jsfiddle.net/5sufd9jj/
A:
Trim removes whitespace from the beginning and end of a string.
If you want to remove consecutive spaces such as 'string string', use the following:
$.trim(name.replace(/\s+/g, ' '));
Updated Example
$(".button").on('click', function() {
var name = $.trim($('input').val().replace(/\s+/g, ' '));
console.log("TRIM " + name);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<input class="input"></input>
<div class="button">CLICK</div>
A:
It is working all right.
trim function removes all newlines, spaces (including non-breaking spaces), and tabs from the beginning and end of the supplied string.
It DOES NOT remove spaces from the middle.
| {
"pile_set_name": "StackExchange"
} |
Q:
Sort documents in Mongodb query by dates closest to a field
I have a collection of documents in mongodb that represent past and upcoming events. There are two date fields in the document, 'start' and 'end' that are bson ISODATE objects. I am doing a find query that gets all of the events that have ended no more than 3 days ago.
db.events.find({'end': {'$gte': datetime.utcnow() - timedelta(days=3)})
How can I sort the response of this query based on the time (and date) between today and the end datetime of the events. In other words, events that ended 2 days ago should occur at approximately the same position as events that will occur 2 days from now. This is important because I do not want to first display all the events that already occurred nor do I want to display events that are happening furthest in the future first either.
A:
If I understand correctly, you want to sort by the difference in time between today and the end date of the event. So if you had one event tomorrow, one event yesterday and one event the day before, the order you want them in are:
event tomorrow
event yesterday
event day before yesterday
Where presumably, 1 and 2 are interchangeable depending on the actual time difference.
This can definitely be done with MongoDB's aggregation framework. It works something as follows:
Match the documents that ended within a certain time period.
Project the difference between today and the end time of each event in a new field (say timeDelta).
Because MongoDB does not have an absolute operator for the aggregation framework, do some condition checking to get the absolute value of the time delta and project it to a new field, say absTimeDelta.
Sort by absTimeDelta.
So it should look something like this:
db.events.aggregate([
{ '$match':
{ 'end':
{ '$gte': datetime.utcnow() - timedelta(days=3) }
}
},
{ '$project':
{
'timeDelta': {
'$subtract': ['$end', datetime.utcnow()]
}
}
},
{ '$project':
{
'absTimeDelta' : {
'$cond' : [
{ '$lte': ['$timeDelta', 0] },
{ '$multiply' : ['$timeDelta', -1 ] },
'$timeDelta'
]
}
}
},
{ '$sort':
{
'absTimeDelta' : 1
}
}
])
EDIT: In MongoDB version 3.2, this JIRA ticket has been fixed, introducing the $abs operator. This means that the second $project can be removed and the first one can be updated. Instead of '$subtract': ['$end', datetime.utcnow()] you can use $abs: { '$subtract': ['$end', datetime.utcnow()] }.
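With that operator the pipeline above collapses to something like this (same pymongo-style syntax as before):
db.events.aggregate([
    { '$match':
        { 'end':
            { '$gte': datetime.utcnow() - timedelta(days=3) }
        }
    },
    { '$project':
        {
            'absTimeDelta': {
                '$abs': { '$subtract': ['$end', datetime.utcnow()] }
            }
        }
    },
    { '$sort':
        { 'absTimeDelta': 1 }
    }
])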
| {
"pile_set_name": "StackExchange"
} |
Q:
Is OpenGL multithreaded?
There are libraries that underhood work using multiple threads. Also there are libraries that are thread safe - objects support usage running in multiple threads.
What about OpenGL? Is it multithreaded? Is it thread safe?
A:
If depends on what you mean by "multithreaded".
If you are thinking about C++ feature like (sharing memory, using locks, etc), then no, OpenGL does not work that way. But this doesn't mean than you can not use threads. You can, with special care.
The main thing to be aware of is the context. You can have several contexts. You can set as current any context for any thread, but only one can be set as current for a thread, not two contexts for the same thread.
Using shared contexts gives you a bit of advantage with multithreading. They share some resources like textures and VBOs. For example, you set ctx1 as current for a thread and use gl-commands in that thread to update a texture to the GPU. Once the update is finished that texture is available for the shared context ctx2 set as current in other thread. The OGL wiki tells about this here and here.
Being that said, the question is "why do I need mutlthreading?" The common answer is "to make things happen faster". The point is that the GPU will draw step by step (using all of its parallelism, of course) but will NOT process two draw commands at the same time. Also, setting a context as current has a light perfomance penalty.
What you likely are looking for is sending data to GPU while it's rendering. You can use shared contexts as I wrote before. But there are other technics like streaming, you can read more at OpenGL Insights book Chapter 28 "Asynchronous Buffer Transfers".
| {
"pile_set_name": "StackExchange"
} |
Q:
PHP Read CSS @import files
If I have a CSS file (main.css) with the following content:
@import url('../styles/global/reset.css');
@import url('/styles/themes/helloWorld/structure.css');
@import url('/styles/themes/helloWorld/presentation.css');
Is it possible to create a PHP file that is give the location of main.css and have it read all of the imported files?
I assume, I start with something like this:
$handle = fopen($_SERVER['DOCUMENT_ROOT'] . '/styles/themes/helloWorld/main.css', 'r');
A:
Try PHP CSS Parser - https://github.com/sabberworm/PHP-CSS-Parser
$oCssParser = new CSSParser(file_get_contents('somefile.css'));
$oCssDocument = $oCssParser->parse();
| {
"pile_set_name": "StackExchange"
} |
Q:
Populating anchor link from several textfields
I am trying to populate several textfields into something like this: <a href="check.php?name=john doe&group=metallica">Check</a>
Below is my script, which I modified from here, but it only shows the result inside the textarea. How can I show it inside the anchor link and generate something like <a href="check.php?name=john doe&group=metallica">Check</a>?
<div id="textBoxContainer">
<input type="text" id="name" onkeyup="UpdateTextArea();" name="name" />
<input type="text" id="group" onkeyup="UpdateTextArea();" name="group" />
</div>
<textarea id="textAreaResult"></textarea>
<a href="check.php?" id="link_check">Check</a>
<script type="text/javascript">
function UpdateTextArea() {
var textBoxContainerDiv = document.getElementById("textBoxContainer");
var textboxes = textBoxContainerDiv.getElementsByTagName("input");
var finalResult = "";
var textAreaFinalResult = document.getElementById("textAreaResult");
for (var i = 0; i < textboxes.length; i++) {
finalResult = finalResult + textboxes[i].id + "=" + textboxes[i].value + "&";
}
textAreaFinalResult.value = finalResult;
}
</script>
A:
If you want to target an anchor tag instead of the textarea, then obviously you need to change that accordingly:
<a id="hrefResult" href="check.php">link</a>
Then in your updateTextArea function, you need to target the anchor's href rather than the textarea's value. There are several other optimizations that can be made in there also, such as making an array of name/value pairs and then doing a join on them so you don't end up with the trailing &, etc.
function UpdateTextArea() {
var textBoxContainerDiv = document.getElementById("textBoxContainer");
var textboxes = textBoxContainerDiv.getElementsByTagName("input");
var length = textboxes.length;
var target = document.getElementById("hrefResult");
var params = [];
var baseref = 'check.php';
for (var i = 0; i < length; i++) {
params.push(textboxes[i].id + "=" + textboxes[i].value);
}
target.href = baseref + '?' + params.join('&');
}
What the heck, here's a fiddle of the whole thing.
It would probably make sense to change the name of the function itself since there's no textarea involved anymore, but I leave that as something for you to do.
| {
"pile_set_name": "StackExchange"
} |
Q:
Can Netty be integrated with ZeroMQ?
ZeroMQ (0MQ) seems to be a framework for different threads ("workers") to communicate with each other. Since Netty is an asynchronous/concurrent network programming framework, I'm wondering if there is a way to use 0MQ on the server-side with Netty for better or more reliable performance. In other words, does a server-side integration between Netty and 0MQ make sense? If not, why? If so, how?
A:
Spotify has open sourced a library netty-zmtp (https://github.com/spotify/netty-zmtp) that can help.
| {
"pile_set_name": "StackExchange"
} |
Q:
Data aggregation mongodb vs mysql
I am currently researching on a backend to use for a project with demanding data aggregation requirements. The main project requirements are the following.
Store millions of records for each user. Users might have more than 1 million entries per year so even with 100 users we are talking about 100 million entries per year.
Data aggregation on those entries must be performed on the fly. The users need to be able to filter the entries by a ton of available filters and then present summaries (totals, averages, etc.) and graphs on the results. Obviously I cannot precalculate any of the aggregation results because the filter combinations (and thus the result sets) are huge.
Users are going to have access on their own data only but it would be nice if anonymous stats could be calculated for all the data.
The data is going to arrive in batches most of the time, e.g. the user will upload the data every day and it could be around 3,000 records. In some later version there could be automated programs that upload every few minutes in smaller batches of 100 items, for example.
I made a simple test of creating a table with 1 million rows and performing a simple sum of one column both in MongoDB and in MySQL, and the performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 sec.
I have also made the test with CouchDB and had much worse results.
What seems promising speed-wise is Cassandra, which I was very enthusiastic about when I first discovered it. However the documentation is scarce and I haven't found any solid examples on how to perform sums and other aggregate functions on the data. Is that possible?
As it seems from my test (maybe I have done something wrong), with the current performance it's impossible to use MongoDB for such a project, although the automated sharding functionality seems like a perfect fit for it.
Does anybody have experience with data aggregation in mongodb or have any insights that might be of help for the implementation of the project ?
Thanks,
Dimitris
A:
If you're looking for a very high performance DBMS and don't need it to be relational, you might consider Cassandra - although its advantages only come into play if you have a database cluster instead of a single node.
You didn't say what limits there are on the physical architecture. You did mention sharding which implies a cluster. IIRC MySQL clusters support sharding too.
It'd also be very useful to know what level of concurrency the system is intended to support, and how data would be added (drip-feed or batch).
You say "Obviously I cannot precalculate any of the aggregation results because the filter combinations (and thus the result sets) are huge."
This is your biggest problem, and will be the most important factor in determining the performance of your system. Sure, you can't maintain materialized views of every possible combination, but your biggest performance win is going to be maintaining limited pre-aggregated views and building an optimizer that can find the nearest match. It's not all that hard.
C.
A:
I've never been impressed by the performance of MongoDB in use cases where JavaScript is required, for instance map-reduce jobs. Maybe it is better in 1.51. I didn't try.
You could also try the free single node edition of Greenplum: http://www.greenplum.com/products/single-node/ and http://www.dbms2.com/2009/10/19/greenplum-free-single-node-edition/
| {
"pile_set_name": "StackExchange"
} |
Q:
Why isn't it possible to build a car moved by wind power?
So, first of all, I am a high school student and I wanted you to explain something to me (in a more "concrete" way, if you know what I mean). I had an idea the other day to make a car with a wind turbine attached to it, so that the more the car runs, the more wind gets into the turbine and the more energy is generated, which powers the car. I know this is some kind of "moto perpetuo" so it wouldn't work anyway, so I asked my physics teacher about it and he said it cannot work because of the first law of thermodynamics, and he asked me for a way to calculate the energy generated by the turbine. I just wanted a concrete way of showing it doesn't work, like using data from wind turbines that exist, cars that exist, and so on. But how do I calculate the energy generated?
A:
The fundamental problem in this case is that the turbine is not powered by 'wind' as such but by the relative motion of the car through the air.
This is not 'free' energy: because the turbine must do work to generate energy, it must also exert a net force opposing the motion of the car, i.e. drag. So any energy you generate with the turbine must ultimately be provided by the car's engine.
Also, no matter how well designed the turbine is, some energy is wasted (second law of thermodynamics), so you will always be worse off.
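To put a rough number on it (a back-of-the-envelope sketch only): an ideal turbine of swept area $A$ in an airstream of speed $v$ and air density $\rho$ can extract at most
$$P_{turbine} \le \frac{16}{27}\cdot\frac{1}{2}\rho A v^{3}$$
(the 16/27 factor is the Betz limit), while the extra drag it creates costs the engine at least
$$P_{engine} = F_{drag}\,v \ge P_{turbine},$$
so whatever realistic numbers you plug in for the car and the turbine, the engine always pays more than the turbine gives back.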
Having said that, there are some similar situations where you can improve the efficiency of a car with additional equipment. The difference is that, to work, they must somehow harvest energy which would otherwise be wasted by the system.
One well established method is a turbocharger. This is a turbine powered not by air flowing over the body but by exhaust gases. Conventionally turbos are used to pressurise air inducted into the engine, which can certainly increase power and potentially improve thermal efficiency. They can also be connected to motor-generators which (in hybrid vehicles at least) can harvest electrical energy from the exhaust (see current F1 engines for more details).
Another example is regenerative braking. Here, instead of using friction between brake pads and disks to slow the car down (which converts kinetic energy to heat which is lost to the surroundings), braking is achieved by using the drive train to drive a generator connected to a load (usually a battery) capable of storing energy. Again, current F1 technology has used this as part of a strategy to achieve fairly spectacular improvements in thermal efficiency.
Now the turbine method you suggest could be used as a form of regenerative braking as in this case you actually want the drag from the turbine to slow the car down.
In both cases the key point is that you are harvesting energy which would otherwise be wasted and thus improving the overall efficiency of the system.
| {
"pile_set_name": "StackExchange"
} |
Q:
Limit the size of List(Of T) - VB.NET
I am trying to limit the size of my generic list so that after it contains a certain number of values, it won't add any more.
I am trying to do this using the Capacity property of the List object, but this does not seem to work.
Dim slotDates As New List(Of Date)
slotDates.Capacity = 7
How would people advice limiting the size of a list?
I am trying to avoid checking the size of the List after each object is added.
A:
There is no built-in way to limit the size of a List(Of T). The Capacity property merely modifies the size of the underlying buffer; it does not restrict it.
If you want to limit the size of the List, you'll need to create a wrapper which checks for invalid sizes. For example:
Public Class RestrictedList(Of T)
Private _list as New List(Of T)
Private _limit as Integer
Public Property Limit As Integer
Get
return _limit
End Get
Set
_limit = Value
End Set
End Property
Public Sub Add(ByVal value As T)
' Reject the add once the configured limit has been reached
If _list.Count = _limit Then
Throw New InvalidOperationException("List at limit")
End If
_list.Add(value)
End Sub
End Class
A:
There are several different ways to add things to a List<T>: Add, AddRange, Insert, etc.
Consider a solution that inherits from Collection<T>:
Public Class LimitedCollection(Of T)
Inherits System.Collections.ObjectModel.Collection(Of T)
Private _Capacity As Integer
Public Property Capacity() As Integer
Get
Return _Capacity
End Get
Set(ByVal value As Integer)
_Capacity = value
End Set
End Property
Protected Overrides Sub InsertItem(ByVal index As Integer, ByVal item As T)
If Me.Count = Capacity Then
Dim message As String =
String.Format("List cannot hold more than {0} items", Capacity)
Throw New InvalidOperationException(message)
End If
MyBase.InsertItem(index, item)
End Sub
End Class
This way the capacity is respected whether you Add or Insert.
A:
You'll want to derive a new LimitedList and shadow the adding methods. Something like this will get you started.
public class LimitedList<T> : List<T>
{
private int limit;
public LimitedList(int limit)
{
this.limit = limit;
}
public new void Add(T item)
{
if (Count < limit)
base.Add(item);
}
}
Just realised you're in VB, I'll translate shortly
Edit See Jared's for a VB version. I'll leave this here in case someone wants a C# version to get started with.
For what it's worth mine takes a slightly different approach as it extends the List class rather than encapsulating it. Which approach you want to use depends on your situation.
| {
"pile_set_name": "StackExchange"
} |
Q:
Jquery function that would return html() of passed div?
I have tried a couple of ways of doing this but didn't do well. I have a div in my HTML skeleton declared with the id "myDiv". I would like to write a function that would take any passed div as its argument and then return the passed div's html(). I can certainly write something like this and get the div text. But if I had a utility function which does nothing but read the .html() and return it, then that would be great.
var txt = $("#div").html();
alert(txt);
Here is the function code:
$.fn.getDivHTML = function(obj)
{
var txt = $(obj.html());
return txt;
}
And I am calling the function like this:
$(document).ready(function()
{
var val = $.fn.getHTML("#myDiv");
alert(val);
});
Any help would be highly appreciated. Thanks.
A:
For a function like you describe this is the answer:
$.fn.getDivHTML = function(selector)
{
return $(selector).html();
}
In your code you are only passing a selector string, not an object, hence the renamed parameter and the wrapping in $().
You will be able to call this exactly in the way you have in your question. But as one commenter pointed out, what's the point? It's a one-liner anyway. All you need to do is replace your doc ready with:
$(document).ready(function()
{
var val = $("#myDiv").html();
alert(val);
});
| {
"pile_set_name": "StackExchange"
} |
Q:
Android 4.4 - create application dir on secondary sd card
I am aware of the changes in access to the SD card introduced by Google with Android 4.4. However, in my application I need to be able to store data on a removable/secondary SD card.
When I create the application folder (app.xyz.com) on the secondary card using the default file manager, I am able to create dirs and files inside it. But by default such a dir doesn't exist on the secondary SD card.
So, I would like to create the application-specific dir programmatically inside my application…
Do you have any idea how to do this??? A simple file.mkdirs(), even with the correct application-related path, doesn't work. Permission error…
I have already spent two days trying to find a way, without any success.
THANKS FOR YOUR HELP!!!
A:
Do you have any idea how to do this?
Use getExternalFilesDirs() (note the plural). If that returns more than one entry, the second and subsequent ones are on removable media. Those directories you can read and write to without any permissions on Android 4.4.
| {
"pile_set_name": "StackExchange"
} |
Q:
Add multiple products to paypal express checkout
For a few hours I have been trying to list multiple products in PayPal Express Checkout. This has to be done in order to increase the customers' trust in what they buy.
How can I create the below array so that it is recognized by PayPal as multiple products?
Listing one product is not a problem. Here is the code:
$requestParams = array(
'RETURNURL' => '***',
'CANCELURL' => '***'
);
$item = array('L_PAYMENTREQUEST_0_NAME0' => 'Test product ',
'L_PAYMENTREQUEST_0_DESC0' => 'Description of my item',
'L_PAYMENTREQUEST_0_AMT0' => '0.01',
'L_PAYMENTREQUEST_0_QTY0' => '1'
);
$orderParams = array(
'PAYMENTREQUEST_0_AMT' => '0.01',
'PAYMENTREQUEST_0_CURRENCYCODE' => 'USD',
'PAYMENTREQUEST_0_ITEMAMT' => '0.01',
'PAYMENTREQUEST_0_SHIPPINGAMT' => '0'
);
$response = $core->paypal->request('SetExpressCheckout',$requestParams + $item + $orderParams);
I have tried lots of combinations, like adding keys and values to the $item array as below in order to add more products to be listed.
I also tried adding keys to the $orderParams array in a similar way, but without success.
Either I got errors from the PayPal API, or PayPal listed only the first product.
$item = array('L_PAYMENTREQUEST_0_NAME0' => 'Test product ',
'L_PAYMENTREQUEST_0_DESC0' => 'Description of my item',
'L_PAYMENTREQUEST_0_AMT0' => '0.01',
'L_PAYMENTREQUEST_0_QTY0' => '1',
'L_PAYMENTREQUEST_1_NAME1' => 'Test product 1',
'L_PAYMENTREQUEST_1_DESC1' => 'Description of my next item',
'L_PAYMENTREQUEST_1_AMT1' => '0.01',
'L_PAYMENTREQUEST_1_QTY1' => '1'
);
This is my first integration; I understand the PayPal flow but I can't get past this.
Thanks.
A:
OK, it was a simple trick to do. For those who might need it:
L_PAYMENTREQUEST_n_NAMEm: "n" is the transaction number (0 for a single transaction) and "m" is the number of the product.
$item = array('L_PAYMENTREQUEST_0_NAME0' => 'Test product ', //title of the first product
'L_PAYMENTREQUEST_0_DESC0' => 'Description of my item', //description of the forst product
'L_PAYMENTREQUEST_0_AMT0' => '0.01', //amount first product
'L_PAYMENTREQUEST_0_QTY0' => '1', //qty first product
'L_PAYMENTREQUEST_0_NAME1' => 'Test ', // title of the second product
'L_PAYMENTREQUEST_0_DESC1' => 'Description item',//description of the second product
'L_PAYMENTREQUEST_0_AMT1' => '0.01',//amount second product
'L_PAYMENTREQUEST_0_QTY1' => '1'//qty second product
);
$orderParams = array(
'PAYMENTREQUEST_0_PAYMENTACTION'=>'Sale', //because we want to sell something
'PAYMENTREQUEST_0_AMT' => '0.02', //total amount (items amount + shipping, etc.)
'PAYMENTREQUEST_0_CURRENCYCODE' => 'USD', //currency code
'PAYMENTREQUEST_0_ITEMAMT' => '0.02', //total amount of the items, without shipping and other taxes
'PAYMENTREQUEST_0_SHIPPINGAMT' => '0' //the shipping amount, will be 0 because we sell digital products
);
Above you can see an example for two products.
These keys and values will be sent to the Express Checkout API in order to obtain the token.
The vars will be sent with GET.
| {
"pile_set_name": "StackExchange"
} |
Q:
VS15 linker libraries
Every time I create a new project I have to specify the OpenGL libraries:
glaux.lib
glu32.lib
glui32.lib
glut32.lib
opengl32.lib
How can I make these libraries included by default in the Visual Studio linker?
A:
Property Sheets exist for exactly this. I recorded a gif.
First, open the Property Manager window in Visual Studio (View -> Other Windows -> Property Manager).
Then click Add New Project Property Sheet.
Enter a name (say, "Opengl") and save it in any directory convenient for you (one shared by all your projects).
A new property sheet "Opengl" will appear in the Property Manager tree.
Open its properties and add the required libraries to the linker options.
Save the settings.
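For reference, a property sheet such as Opengl.props is just an MSBuild XML file. A minimal sketch of what it might contain (the file Visual Studio generates will have some extra boilerplate):
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup>
    <Link>
      <!-- Libraries appended to every project that imports this sheet -->
      <AdditionalDependencies>opengl32.lib;glu32.lib;glut32.lib;glui32.lib;glaux.lib;%(AdditionalDependencies)</AdditionalDependencies>
    </Link>
  </ItemDefinitionGroup>
</Project>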
Now you can use the resulting Opengl.props in all new projects.
After creating a new project, you will need to open the Property Manager and click Add Existing Property Sheet.
You can also edit the default property sheets (for example, Microsoft.Cpp.Win32.user); then these libraries will be linked into all C++ projects by default, but this is not recommended.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I download zip file from controller on client side? Spring boot
I have a file.zip in the DB as a BLOB. I want to create a method in a Spring controller to download this file on the client side.
@RequestMapping(value = "/downloadResolution/{resolutionId}", method = RequestMethod.GET)
public void downloadResolution(@PathVariable("resolutionId") Long resolutionId, HttpServletResponse response) {
Resolution resolution = resolutionService.findOne(resolutionId);
ResolutionArchive resolutionArchive = resolution.getResolutionArchive();
if (resolutionArchive == null) return;
byte[] archive = resolutionArchive.getArchive();
//this byte[] archive is my zip file from the DB
}
How can I change this method in order to download the file on the client side?
The user presses the download button, the method gets the data from the DB as a byte[], and the user can download it.
EDIT
I tried the solution of @pleft and it works. I should have mentioned that I use AJAX to call the method:
function downloadResolution(resulutionId) {
$.ajax({
type: 'GET',
dataType: "json",
url: '/downloadResolution/' + resulutionId,
success: function (data) {
},
error: function (xhr, str) {
}
});
}
How can I make this work if I use AJAX?
A:
You can use the OutputStream of your HttpServletResponse to write your archive bytes there.
e.g.
response.setHeader("Content-Disposition", "attachment; filename=file.zip");
response.setHeader("Content-Type", "application/zip");
response.getOutputStream().write(archive);
EDIT
Sample download
@RequestMapping(value = "/downloadResolution/{resolutionId}", method = RequestMethod.GET, produces = MediaType.APPLICATION_OCTET_STREAM_VALUE)
public void downloadResolution(@PathVariable("resolutionId") Long resolutionId, HttpServletResponse response) throws IOException {
String test = "new string test bytes";
response.setHeader("Content-Disposition", "attachment; filename=file.txt");
response.getOutputStream().write(test.getBytes());
}
| {
"pile_set_name": "StackExchange"
} |
Q:
How can resources be loaded from a .unitypackage file while the program is running?
At runtime, the server has to send the application an AssetBundle and a file with the .unitypackage extension, which stores a database of markers.
Is it possible to unpack a .unitypackage file while the application is running, and if so, how can this be done?
A:
Files of the unitypackage type can only be used in the editor. The only API that works with them is AssetDatabase, a class available only in the Unity editor.
As a possible solution, you can try sending the marker information in a text format such as JSON and use the built-in JsonUtility to work with it, or try packing the resource you need into an AssetBundle.
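For illustration, a minimal sketch of the JsonUtility route (the MarkerData fields are made up here; they just need to mirror the JSON the server sends):
using UnityEngine;

[System.Serializable]
public class MarkerData {
    public string id;
    public float x;
    public float y;
}

public static class MarkerLoader {
    // jsonText is the JSON string received from the server
    public static MarkerData Parse(string jsonText) {
        return JsonUtility.FromJson<MarkerData>(jsonText);
    }
}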
| {
"pile_set_name": "StackExchange"
} |
Q:
Counting occurrences in a many to many table with Entity framework and LINQ
I have a many-to-many table that links Students and Courses. This table is called StudentCourses. How can I count the number of occurrences in this table based on the course ID, to get the number of students that have a particular course? I'm using Entity Framework and LINQ.
So far, all I can think of is that I have to go through all the students and loop over all their courses, something like:
public int numberOfCourseStudents(int courseId) {
    var students = ApplicationDbContext.Students.Include("Courses").ToList();
    var count = 0;
    for (int i = 0; i < students.Count; i++) {
        // check every course of every student against the wanted course id
        foreach (var course in students[i].Courses) {
            if (course.Id == courseId) {
                count++;
            }
        }
    }
    return count;
}
This seems silly though, as I have this StudentCourses table and all I need to do is match the courseId with any course in there and then count the matches, which will give me the int I require.
Is there a simpler / more effective way to do this?
It just doesn't feel right to me.
Thanks
A:
I can think of 2 approaches:
Find the matching course, count the number of students in it
Count the number of students with a course that matches a given Id.
1st approach:
public int CountCourseStudents(int courseId) {
// Added include in order to avoid lazy-load of students prop
var course = context.Courses.Include("Students").FirstOrDefault(c => c.Id == courseId);
if (course == null) throw new InvalidOperationException("No matching course found");
return course.Students.Count;
}
2nd approach:
public int CountCourseStudents(int courseId) {
var courseStudents = context.Students
.Where(student => student.Courses
.Any(course => course.Id == courseId));
return courseStudents.Count();
}
| {
"pile_set_name": "StackExchange"
} |
Q:
Python for-loop counter error
I am attempting to write a short script to figure out the number of days it takes to reach a given principal in the bank due to daily interest. The code below does not yield any errors when run in IDLE, but the counter returns 0. Any ideas what I missed?
def main():
# irrelevant code elided by msw, Bal, Int and Tar are numeric
counter = 0
for i in range(0):
if (Bal * Int) == Tar:
print '1'
else:
counter + 1
print counter
A:
I'm not sure what you're getting at with this loop:
for i in range(0):
if (Bal * Int) == Tar:
print '1'
else:
counter + 1
range(0) is an empty list, so the loop won't execute at all.
counter + 1 simply calculates one more than counter, it won't increment counter, you probably mean counter += 1
There's nothing in the loop that changes at each iteration, so if you ever get into it, it will be an infinite loop.
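Putting those fixes together, a minimal corrected sketch (this assumes Int is the daily interest rate, e.g. 0.01, and that Bal and Tar are already defined; it keeps the Python 2 print used above):
counter = 0
balance = Bal
while balance < Tar:                  # loop until the target is reached, instead of range(0)
    balance = balance * (1 + Int)     # the balance now actually changes each iteration
    counter += 1                      # actually increment the counter
print counter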
A:
I believe the formula to calculate final balance with interest is:
Final = Principal * ( 1 + interest ) ** interest_period
Assuming I got this correct, then you can find out how many interest periods it will take by:
def how_long(start_money, interest_rate, final_money):
day = 0
money = start_money
while True:
if money >= final_money:
break
day += 1
money = start_money * (1 + interest_rate)**day
return day, money
| {
"pile_set_name": "StackExchange"
} |
Q:
Giving each row a variable when converting an HTML table to CSV
So I'm writing a simple web app; all it does is load a CSV, add an "agree?" checkbox to the end of each row, then download the table as CSV.
That downloaded CSV will later be converted to an SQL table, but before that, I need to find a way to give each row a boolean variable based on what the user checked or didn't check.
So here's the JS which is built out of a few functions that load a CSV, add the checkbox I mentioned, then convert it back.
function buildHeaderElement (header) {
const headerEl = document.createElement('thead')
headerEl.append(buildRowElement(header, true))
return headerEl
}
function buildRowElement (row, header) {
const columns = row.split(',')
const rowEl = document.createElement('tr')
for (column of columns) {
const columnEl = document.createElement(`${header ? 'th' : 'td'}`)
columnEl.textContent = column
rowEl.append(columnEl)
}
rowEl.append(provideeColumnAgree(row, header))
return rowEl
}
function provideeColumnAgree(row, header) {
const columnAgree = document.createElement(`${header ? 'th' : 'td'}`)
if(header)
{
columnAgree.textContent = 'Agree?';
}
else
{
const checkboxAgree = document.createElement(`input`)
checkboxAgree.setAttribute("type", "checkbox");
columnAgree.append(checkboxAgree)
}
return columnAgree
}
function populateTable (tableEl, rows) {
const rowEls = [buildHeaderElement(rows.shift())]
for (const row of rows) {
if (!row) { continue }
rowEls.push(buildRowElement(row))
}
tableEl.innerHTML= ''
return tableEl.append(...rowEls)
}
function createSubmitBtn() {
var button = document.createElement("button");
button.innerHTML = "Download CSV";
var body = document.getElementsByTagName("body")[0];
body.appendChild(button);
button.addEventListener ("click", function() {
exportTableToCSV('members.csv')
});
}
function downloadCSV(csv, filename) {
var csvFile;
var downloadLink;
// CSV file
csvFile = new Blob([csv], {type: "text/csv"});
downloadLink = document.createElement("a");
downloadLink.download = filename;
downloadLink.href = window.URL.createObjectURL(csvFile);
downloadLink.style.display = "none";
downloadLink.click();
}
function exportTableToCSV(filename) {
var csv = [];
var rows = document.querySelectorAll("table tr");
for (var i = 0; i < rows.length; i++) {
var row = [], cols = rows[i].querySelectorAll("td, th");
for (var j = 0; j < cols.length; j++)
row.push(cols[j].innerText);
csv.push(row.join(","));
}
// Download CSV file
downloadCSV(csv.join("\n"), filename);
}
function readSingleFile ({ target: { files } }) {
const file = files[0]
const fileReader = new FileReader()
const status = document.getElementById('status')
if (!file) {
status.textContent = 'No file selected.'
return
}
fileReader.onload = function ({ target: { result: contents } }) {
status.textContent = `File loaded: ${file.name}`
const tableEl = document.getElementById('csvOutput')
const lines = contents.split('\n')
populateTable(tableEl, lines)
status.textContent = `Table built from: ${file.name}`
createSubmitBtn()
}
fileReader.readAsText(file)
}
window.addEventListener('DOMContentLoaded', _ => {
document.getElementById('fileSelect').addEventListener('change', readSingleFile)
})
The HTML is quite simple
<html>
<body>
<input type="file" id="fileSelect"/>
<div id="status">Waiting for CSV file.</div>
<table id="csvOutput"></table>
<script src="script.js"></script>
</body>
</html>
Here's the link to the project: https://jsfiddle.net/95tjsom3/1/
A:
While downloading the CSV, you can check whether the column contains a checkbox or not, and if it does, whether it is checked. Then you can alter the contents of that particular column.
function exportTableToCSV(filename) {
var checkboxes = document.getElementsByTagName("input"); // get all checkboxes in the array
var csv = [];
var rows = document.querySelectorAll("table tr");
for (var i = 0; i < rows.length; i++) {
var row = [], cols = rows[i].querySelectorAll("td, th");
for (var j = 0; j < cols.length; j++) {
if(cols[j].innerHTML.includes('<input type="checkbox">')) {
if(checkboxes[i].checked) {
row.push("AGREE");
}
else {
row.push("NOT AGREE");
}
}
else {
row.push(cols[j].innerText);
}
}
csv.push(row.join(","));
}
// Download CSV file
downloadCSV(csv.join("\n"), filename);
}
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get core temperature of haswell i7 cores in i3status
I want to use i3status to display my CPU core temperatures (Haswell i7). However, the setting:
order += "cpu_temperature 1"
#...
cpu_temperature 1{
format = "T: %degree °C"
}
#
doesn't display the correct core temperature. The numbers it shows seem to correspond to the value xsensors shows for temp1; if I change the 1 to 2 above, it corresponds to xsensors temp2. Trying 3 or 4 doesn't have any effect. However, I want to get the true core temperatures of all 4 cores with i3status. How can I do this?
A:
i3status
Using i3status I believe you can change your configuration slightly so that it gets the CPU's core temperature directly from /sys by providing a path to its value. So change your rule to something like this:
order += "cpu_temperature 1"
# and more if you like...
# order += "cpu_temperature 2"
#...
cpu_temperature 1 {
format = "T: %degrees °C"
path = "/sys/devices/platform/coretemp.0/temp1_input"
}
# cpu_temperature 2 {
# format = "T: %degrees °C"
# path = "/sys/devices/platform/coretemp.0/temp2_input"
# }
Here are 4 other ways to get your temp:
/proc
$ cat /proc/acpi/thermal_zone/THM0/temperature
temperature: 72 C
acpi
$ acpi -t
Thermal 0: ok, 64.0 degrees C
From the acpi man page:
-t | --thermal
show thermal information
/sys
$ cat /sys/bus/acpi/devices/LNXTHERM\:01/thermal_zone/temp
70000
lm_sensors
If you install the lmsensors package like so:
Fedora/CentOS/RHEL:
$ sudo yum install lm_sensors
Debian/Ubuntu:
$ sudo apt-get install lm-sensors
Detect your hardware:
$ sudo sensors-detect
You can also install the modules manually, for example:
$ sudo modprobe coretemp
$ modprobe i2c-i801
NOTE: sensors-detect should detect your specific hardware, so you might need to modprobe <my driver> instead for the 2nd command above.
On my system I have the following i2c modules loaded:
$ lsmod | grep i2c
i2c_i801 11088 0
i2c_algo_bit 5205 1 i915
i2c_core 27212 5 i2c_i801,i915,drm_kms_helper,drm,i2c_algo_bit
Now run the sensors app to query the resulting temperatures:
$ sudo sensors
acpitz-virtual-0
Adapter: Virtual device
temp1: +68.0°C (crit = +100.0°C)
thinkpad-isa-0000
Adapter: ISA adapter
fan1: 3831 RPM
temp1: +68.0°C
temp2: +0.0°C
temp3: +0.0°C
temp4: +0.0°C
temp5: +0.0°C
temp6: +0.0°C
temp7: +0.0°C
temp8: +0.0°C
coretemp-isa-0000
Adapter: ISA adapter
Core 0: +56.0°C (high = +95.0°C, crit = +105.0°C)
coretemp-isa-0002
Adapter: ISA adapter
Core 2: +57.0°C (high = +95.0°C, crit = +105.0°C)
This is on my Thinkpad T410 which has i5 M560. Here's one of the cores:
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 37
model name : Intel(R) Core(TM) i5 CPU M 560 @ 2.67GHz
stepping : 5
cpu MHz : 1199.000
cache size : 3072 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid
bogomips : 5319.22
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
| {
"pile_set_name": "StackExchange"
} |
Q:
Efficient uniqueness check on large dataset
I'm refactoring a health monitoring system which requires that certain attributes of an Entity have to be unique across the system. The attributes of an Entity are configurable by the end-user and the user can pick one or more attributes to be unique (either "universally" unique or unique across a geographical area).
Currently, the solution performs very poorly when looking up these unique values (we use Postgres). Using Postgres partial indexes mitigates the performance issue, but on large datasets (500 million rows, which is not unusual) the performance is not acceptable.
One solution I'm considering is to hash the attribute + value using a trigger before INSERT and UPDATE. The trigger would check this "hashes" unique-index before allowing the INSERT. If the hash is missing, then it inserts. Otherwise it blocks the operation.
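To make the idea concrete, a rough sketch (table and column names here are made up; in Postgres the same check could also be done without a trigger by putting a unique index directly on the hash expression):
-- illustrative only: enforce uniqueness of attribute+value via a hashed expression index
CREATE UNIQUE INDEX entity_attr_value_hash_uniq
    ON entity_attribute_values (md5(attribute_name || ':' || attribute_value))
    WHERE must_be_unique;  -- partial: only rows flagged as unique-constrained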
Is there a better solution to this problem, considering the size of the dataset?
Edit:
Following @JimmyJames's suggestion (use a Bloom index), I ran some tests to verify which index is faster for a direct lookup.
Env: Postgres 12, 64Gb ram, 16 cores AMD
First I have created 500 millions pseudo-hashes:
insert into bloom_filter (
hash
)
select
gen_random_uuid()
from generate_series(1, 500000000) s(i);
Created a b-tree index:
CREATE INDEX idx_btree_bar on bloom_filter (hash);
Index creation took ~19 min.
A simple lookup takes 24 ms (milliseconds):
select count(*) from bloom_filter where hash= '99c2b46f-cc36-4249-ae36-f16f047f2962';
Then I dropped the b-tree index and created a bloom index:
CREATE EXTENSION bloom;
CREATE INDEX idx_bloom_hash ON bloom_filter USING bloom(hash)
WITH (length=64, col1=4);
Index creation took: 2m 54s
The same lookup query as above takes 1.536 sec, which is significantly slower than with the b-tree index.
Not surprisingly, a hash index has a lookup speed similar to that of a b-tree index.
A:
One solution I'm considering is to hash the attribute + value using a trigger before INSERT and UPDATE. The trigger would check this "hashes" unique index before allowing the INSERT. If the hash is missing, then it inserts it; otherwise it blocks the operation.
You should probably consider using a Bloom filter. This is an approach that will tell you for sure if an element is not in the set. It cannot tell you for sure if the element is in the set. Here's a good interactive page for learning more about the concept.
Postgres has support for bloom indexes. I would encourage you to explore this before building your own solution.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to get list of URLs for a domain
I would like to generate a list of URLs for a domain but I would rather save bandwidth by not crawling the domain myself. So is there a way to use existing crawled data?
One solution I thought of would be to do a Yahoo site search, which lets me download the first 1000 results in TSV format. However to get all the records I would have to scrape the search results. Google also supports site search but doesn't offer an easy way to download the data.
Can you think of a better way that would work with most (if not all) websites?
thanks,
Richard
A:
You can download a list of up to 500 URLs free through this online tool:
XML Sitemap Generator
...Just select "text list" after the tool crawls your site.
A:
It seems there is no royal road to web crawling, so I will just stick to my current approach...
Also I found most search engines only expose the first 1000 results anyway.
| {
"pile_set_name": "StackExchange"
} |
Q:
How can I see the raw command that is been executed in a ProcessBuilder?
I am trying to debug a ProcessBuilder in Java.
Is there a way to see the raw command that has been executed in a ProcessBuilder?
I am trying to execute a curl command.
How can I see the raw command?
I tried the error/output stream, but that didn't produce any useful output.
Does anyone have an idea of which stream I have to use?
StringBuilder builder = new StringBuilder();
BufferedReader input = new BufferedReader(new InputStreamReader(process.getInputStream()));
String line;
while ((line = input.readLine()) != null) {
    builder.append(line);
    builder.append(" ");
}
String result = builder.toString();
I also have tried this:
ProcessBuilder commandBuilder = new ProcessBuilder("curl", VOPT, AUOPT, AUV, AUT, URL, DURLE, PROJECTS, PID, DURLE, NOTES, NOTESVAL, DURLE, NAME, NAMEVAL);
Process command = commandBuilder.start();
Is there a property in the Process object where I can see the raw command?
Eventually, I have to simulate this command (the parameters are all correct in the ProcessBuilder):
curl -v -H "Authorization: Bearer <myPersonalToken>" https://app.asana.com/api/1.0/tasks --data-urlencode "projects=<projectId>" --data-urlencode "notes=PRTG_Message" --data-urlencode "name=8005"
A:
ProcessBuilder has a 'command()' method which will give you back the argument list it will use to create the process. If you concatenate this with spaces, it should give you the resulting native command.
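For example, a quick way to print it (using the commandBuilder variable from the question):
// join the argument list into one string for logging/debugging
String rawCommand = String.join(" ", commandBuilder.command());
System.out.println(rawCommand);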
If that is not sufficient you could add && sleep 10000 and then check the running processes for the actual process being executed.
| {
"pile_set_name": "StackExchange"
} |
Q:
what is history id in user profile gmail api?
got this output what does historyId mean and is it epoch time ?
UserInfo is
{'emailAddress': '[email protected]', 'messagesTotal': 22919, 'threadsTotal': 22016, 'historyId': '1727906'}
code used
from googleapiclient.discovery import build
from httplib2 import Http
from oauth2client import file, client, tools
from dateutil.relativedelta import relativedelta
from datetime import datetime
SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'
userInfo = service.users().getProfile(userId='me').execute()
print ("UserInfo is \n %s" % (userInfo))
A:
Answer: historyId is not epoch time; it is an ID.
Definition of historyId
Directly from the documentation for getProfile:
historyId (unsigned long): The ID of the mailbox's current history record.
Usage
Used with history.list:
Lists the history of all changes to the given mailbox. History results are returned in chronological order (increasing historyId).
Also used in the following
Synchronizing Clients with Gmail
Keeping your client synchronized with Gmail is important for most application scenarios. There are two overall synchronization scenarios: full synchronization and partial synchronization. Full synchronization is required the first time your client connects to Gmail and in some other rare scenarios. If your client has recently synchronized, partial synchronization is a lighter-weight alternative to a full sync. You can also use push notifications to trigger partial synchronization in real-time and only when necessary, thereby avoiding needless polling.
You may also find sync
| {
"pile_set_name": "StackExchange"
} |
Q:
ASP.NET MVC 2 and SQL Table Profile Provider
I'm trying to add the sample table profile provider from http://www.asp.net/downloads/sandbox/table-profile-provider-samples to a new MVC 2 site.
After a bit of research and fiddling around, I've arrived at a profile class that looks like this.
namespace MyNamespace.Models
{
public class UserProfile : ProfileBase
{
[SettingsAllowAnonymous(false),CustomProviderData("FirstName;nvarchar")]
public string FirstName
{
get { return base["FirstName"] as string; }
set { base["FirstName"] = value; }
}
[SettingsAllowAnonymous(false),CustomProviderData("LastName;nvarchar")]
public string LastName
{
get { return base["LastName"] as string; }
set { base["LastName"] = value; }
}
public static UserProfile GetUserProfile(string username)
{
return Create(username,false) as UserProfile;
}
public static UserProfile GetUserProfile()
{
return Create(Membership.GetUser().UserName,true) as UserProfile;
}
}
}
And a web.config like
<profile enabled="true" defaultProvider="TableProfileProvider" inherits="MyNamespace.Models.UserProfile">
<providers>
<clear />
<add name="TableProfileProvider" type="Microsoft.Samples.SqlTableProfileProvider" connectionStringName="ContentDB" table="aspnet_UserProfiles" applicationName="/"/>
</providers>
</profile>
Things I think I've found out along the way are
Using a custom provider with MVC requires the "inherits" attribute on the <profile> element in web.config, which precludes the use of a <properties><add ....> construct with the same profile field name.
The sample SQL table profile provider needs the CustomProviderData attribute and, because of the above, it cannot appear in the web.config file, so needs to be added as an attribute to the properties in the profile class.
It all seems to work OK once a user is logged in. However, I want to capture some profile data as part of the new user registration process, and I cannot seem to access the profile object until the user has logged in.
I've tried adding a call to save profile data in the new user registration section of the MVC template code:
FormsService.SignIn(model.UserName, false /* createPersistentCookie */);
UserProfile profile = UserProfile.GetUserProfile(Membership.GetUser().UserName);
profile.FirstName = "Doug";
Profile.Save();
return RedirectToAction("Index", "Home");
However it seems that Membership.GetUser() is null until the user actually logs in. I also tried using the user's name from the model.
FormsService.SignIn(model.UserName, false /* createPersistentCookie */);
UserProfile profile = UserProfile.GetUserProfile(model.UserName);
profile.FirstName = "Doug";
profile.Save();
return RedirectToAction("Index", "Home");
This gets a bit further, but fails when trying to set the FirstName profile field, with an error message along the lines of "trying to set an attribute as an anonymous user, but this is not allowed" (sorry, don't have access to the exact message as I'm typing this).
Is there any way round this? It looks like the FormsService.SignIn method does not actually log the user in as far as forms authentication is concerned, and it needs a round trip to be fully logged in, presumably needing the cookie to be submitted back to the server.
If there's no easy way round this I could populate the profile table directly using data access methods (insert into aspnet_UserProfiles ....). Is this a bridge too far, or is it a viable solution?
Hasn't anyone got this problem? No? Just me then!
Just to update, I've tried the suggestion Franci Penov makes in his answer to this posting.
So, now my code looks like this.
FormsService.SignIn(model.UserName, false /* createPersistentCookie */);
GenericIdentity id = new GenericIdentity(model.UserName);
HttpContext.User = new GenericPrincipal(id, null);
UserProfile profile = UserProfile.GetUserProfile(Membership.GetUser().UserName) as UserProfile;
profile.FirstName = "Doug";
profile.Save();
return RedirectToAction("Index", "Home");
Now, at least the call to Membership.GetUser() returns a valid MembershipUser object, but trying to set the FirstName profile property still results in the message This property cannot be set for anonymous users.
So, the user is logged on as far as Membership is concerned, but the Profile system still thinks not.
Any ideas?
A:
Hurrah!
Reading this posting even more closely, I thought I'd try calling the profile Initialize method explicitly, and it worked!
Final full code for the register action method is:
[HttpPost]
public ActionResult Register(RegisterModel model)
{
if (ModelState.IsValid)
{
// Attempt to register the user
MembershipCreateStatus createStatus = MembershipService.CreateUser(model.UserName, model.Password, model.Email);
if (createStatus == MembershipCreateStatus.Success)
{
FormsService.SignIn(model.UserName, false /* createPersistentCookie */);
GenericIdentity id = new GenericIdentity(model.UserName);
HttpContext.User = new GenericPrincipal(id, null);
UserProfile profile = UserProfile.GetUserProfile(Membership.GetUser().UserName) as UserProfile;
profile.Initialize(Membership.GetUser().UserName, true);
profile.FirstName = "Doug";
profile.Save();
return RedirectToAction("Index", "Home");
}
else
{
ModelState.AddModelError("", AccountValidation.ErrorCodeToString(createStatus));
}
}
// If we got this far, something failed, redisplay form
ViewData["PasswordLength"] = MembershipService.MinPasswordLength;
return View(model);
}
Hope this helps.
Doug
| {
"pile_set_name": "StackExchange"
} |
Q:
c# wcf file and folder browser
I have a Windows service that hosts a WCF service to allow remote file and folder browsing. The Windows service runs under the Local System account.
When browsing the c:\ drive, the service reports over 2800 files in that folder.
I have single-stepped through the code and it does indeed report >2800 files.
How can this be correct?
C# Code
//Files Manager
public ReturnClass FindSubFiles(String Folder_To_Search, String User, String SessionId)
{
ReturnClass myReturnClass = new ReturnClass(-1, String.Empty, String.Empty, null, null, null, null);
try
{
Logging.Write_To_Log_File("Entry", MethodBase.GetCurrentMethod().Name, "", "", "", "", User, SessionId, 1);
string[] filePaths = Directory.GetFiles(Folder_To_Search);
int count = 0;
foreach (string Folder in filePaths)
{
filePaths[count] = Path.GetFileName(filePaths[count]);
count++;
}
myReturnClass.ErrorCode = 1;
myReturnClass.FilePaths = filePaths;
Logging.Write_To_Log_File("Exit", MethodBase.GetCurrentMethod().Name, "", "", "", "", User, SessionId, 1);
return myReturnClass;
}
catch (Exception ex)
{
Logging.Write_To_Log_File("Error", MethodBase.GetCurrentMethod().Name, "", "", ex.ToString(), "", User, SessionId, 2);
myReturnClass.ErrorCode = -1;
myReturnClass.ErrorMessage = ex.ToString();
return myReturnClass;
}
}
A:
The path I was passing in was c:
What I should be passing in is c:\\
C# Code
public ReturnClass FindSubFiles(String Folder_To_Search ,
String User, String SessionId )
{
ReturnClass myReturnClass = new ReturnClass(-1, String.Empty, String.Empty,
null, null, null, null);
try
{
Logging.Write_To_Log_File("Entry", MethodBase.GetCurrentMethod().Name,
"", "", "", "", User, SessionId, 1);
string[] filePaths = Directory.GetFiles(Folder_To_Search + "\\");
int count = 0;
foreach (string Folder in filePaths)
{
filePaths[count] = Path.GetFileName(filePaths[count]);
count++;
}
myReturnClass.ErrorCode = 1;
myReturnClass.FilePaths = filePaths;
Logging.Write_To_Log_File("Exit", MethodBase.GetCurrentMethod().Name,
"", "", "", "", User, SessionId, 1);
return myReturnClass;
}
catch (Exception ex)
{
Logging.Write_To_Log_File("Error", MethodBase.GetCurrentMethod().Name,
"", "", ex.ToString(), "", User, SessionId, 2);
myReturnClass.ErrorCode = -1;
myReturnClass.ErrorMessage = ex.ToString();
return myReturnClass;
}
}
thanks
Damo
| {
"pile_set_name": "StackExchange"
} |
Q:
What should I be using for an XML parser in my Android App?
I want to create an app that uses a potentially large XML file. It will also modify the file and ideally be able to traverse it in reverse.
I know there are SAX, DOM, and the XML pull parser. The pull parser is out, unless I spend memory on creating my own tree of objects, which does not seem feasible.
That leaves SAX and DOM, unless there is another parser out there that can do what I want. Highly improbable, I know.
Yes, I saw this answer: https://stackoverflow.com/questions/7498616/which-xml-parser-should-i-use-for-android
Thoughts on having tree-like usability without having to use DOM?
A:
There are a lot of options when it comes to parsing XML, but which parser you can use when depends on your own requirements. For that you need to know the basic differences between the parsers. Here is some basic information.
A SAX parser is one where your code is notified as the parser walks through the XML tree, and you are responsible for keeping track of state and constructing any objects you might want in order to keep track of the data as the parser marches through.
A DOM parser reads the entire document and builds up an in-memory representation that you can query for different elements. Often, you can even construct XPath queries to pull out particular pieces.
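To give a feel for the callback style described above, a minimal SAX handler skeleton (illustrative only) could look like this:
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class MyHandler extends DefaultHandler {
    // called for every opening tag; keep whatever state you need here
    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        // e.g. remember which element we are currently inside
    }

    // called with the text between tags
    @Override
    public void characters(char[] ch, int start, int length) {
        // accumulate text for the current element
    }
}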
And since, as you said, you have a large file and also want faster performance, I suggest that you use the StAX parser. Here is a link for that.
Hope this will help you...
Also refer this link.
| {
"pile_set_name": "StackExchange"
} |
Q:
QR Code generation in shell / mac terminal
I want to create QR codes for a project I'm working on in AppleScript; the resulting QR code will be placed in an InDesign document. I have found that there is a plugin for InDesign, but I suspect that requires user interaction.
So I've been searching for how to generate the QR code using a shell command. I've found things related to PHP and Rails and even ColdFusion, but none of those fit the bill here. I need to generate them using a shell command, so Image Events or Perl, basically anything I can run from the command line that comes with Mac OS.
thanks for your help.
antotheer
I wonder if I could call a URL using curl or something to get one?
A:
For doing something similar, we use libqrencode.
It's a c library for generating QR codes, but it comes with a command line utility (called qrencode) which lets you generate QR codes from a string, e.g.:
./qrencode -o /tmp/foo.png "This is the input string"
It supports most options you'd probably want (e.g. error correction level, image size, etc.).
We've used it in production for a year or two, with no problems.
I've only run it on linux systems, but there's no reason you shouldn't be able to compile it on Mac OS, assuming you have a compiler and build tools installed (and any libraries it depends on of course).
A:
As Riccardo Cossu mentioned please use homebrew:
brew install qrencode
qrencode -o so.png "http://stackoverflow.com"
| {
"pile_set_name": "StackExchange"
} |
Q:
Would a handheld particle accelerator weapon be feasible?
I saw this on an interesting Reddit thread and wondered if you could replace all the handwave blaster weapons in sci-fi with particle accelerator rifles or pistols, what tech would be required to make this possible, and what advantages this would give to militaries.
https://www.reddit.com/r/scifi/comments/43wtl7/particle_beams_the_ultimate_hard_scifi_weapon/
I'd like good semi-hard sci-fi answers with somewhat good research behind them.
Please do not tell me it is not possible, but rather tell me what technologies would be needed to make it possible.
EDIT: For a possible power source, maybe a sort of diamond battery (using synthetic diamonds; real ones are too expensive), or even multiple layers of paper-thin graphene supercapacitors, or even the new Li-ion batteries with graphite anodes made by the University of Arizona.
A:
No, for a number of reasons.
gross inefficiency.
charged particles are easily deflected by magnetic fields
charged particles do not travel far at all in air
Of course, if we're talking large enough particles - say, the size of a bullet... but that's cheating.
You might have a weapon using "particle acceleration" if the accelerated particles aren't the energy delivery component (as said above and in @sdfgeoff's excellent answer, if they are you're left with too much bang required for not enough buck).
You can accelerate particles in a handy, shielded, portable vacuum, and have them brake hard against a variety of targets. You get all kinds of radiation and unaccelerated (but often fast enough) particles of several types.
So you can get X rays, several kinds of brehmsstrahlung, even neutrons. Neutron activation and the appropriate isotope gives access to a wide menu of gamma rays.
You don't get a gun, but a fancy handheld assassination weapon is somewhat possible.
Example
With sufficiently strong insulators and power supply, you can further reduce this gadget so that it's no bigger than two fists.
At the same time, it can produce a significant neutron flux; if you put it into a sealed container full of deuterium, it will produce a remarkable neutron flux. Place it somewhere the victim may be exposed (the radiation easily penetrates a wall) and the poor guy is started on the road to several nasty kinds of cancer.
Example
Forget neutrons, accelerate electrons with very high voltages so that they smash against a suitable target at an angle. A predictably shaped cone of ionizing radiation is generated.
With enough current, which requires more advanced technology to manufacture the gadget, you can have the victim fall ill within a few minutes, and die within days of acute radiation sickness.
A:
What tech would be needed? Extremely good power sources.
The muzzle energy of a .45 pistol is around 560 joules per shot. An AA battery contains 12,000 joules, or enough energy for about 20 shots under ideal conditions. However, with every mass-based weapon, at least 50% of the energy goes into recoil, and as soon as you introduce the circuitry of a typical coil-gun, you're even lower. This guy measured 2-3% effective efficiency from stored energy to projectile, and that doesn't even count the charging. Now if 12,000 joules is all used in one shot at 2% efficiency, we've gone down from a .45 round to a .22 round. Eh, maybe use a lithium cell instead of an alkaline!
Even this is slightly idealized because an AA battery has less capacity at higher currents, so if you want a good refire rate, you'll need more batteries for the same output energy. But at this scale, it's at least feasible. A particle-beam pistol could be a viable weapon. (Note: it is hard to actually do this energy conversion. Coil guns typically aren't run on a single AA battery even if they only output energy in the 1-2 kJ region.)
Hold on, I asked about particle accelerators and you're talking about coil guns and pistols
Coil guns could be seen as a simpler and slightly more ideal form of particle accelerator. You put all your energy into accelerating the projectile. In a particle accelerator, you first have to ionise it. The projectiles travel faster, but the energy dealt on impact determines the damage done.
Now let's talk explosives. 1 kg of TNT is equivalent to 4 million joules of energy. An AIM-9 Sidewinder missile's warhead weighs 9 kg, and I'm willing to bet it's got more than TNT inside it. But let's just assume that our AIM-9 missile can output some 40 MJ of destruction.
Allowing for our 2% efficiency from earlier, we need an electrical source providing 20 TJ of energy to deal the same amount of damage. The engine from a 777 airliner outputs 83,164 kW, so to fire our weapon we have to be running that airliner engine for around 67 hours. An airliner burns tonnes of fuel per hour. Needless to say, the extra energy required to manoeuvre those tonnes of fuel vs carrying 10 kg of high explosive?
So how about nuclear reactors then? It's very hard to estimate how heavy a reactor is (and in space, it's all about how much it weighs). The RTGs flown on existing spacecraft are measured in the hundreds of watts, so you'll be charging your guns for years. However, if we have a pair of Nimitz nuclear reactors (550 MW), then we can squeeze off a shot every twenty seconds. Still, two A4W reactors aren't lightweight, and one shot every 20 seconds is pretty pitiful.
In short: you can stock a lot of missiles for the same mass.
So if you can hand-wave a good powersource that isn't more destructive by itself, then you can have your particle guns. (Is a fusion reactor powered particle accelerator more destructive than a fusion warhead? How about an antimatter reactor vs antimatter warhead?)
So where might a particle accelerator be useable even on the large scale? A large space station will have several large reactors powering it's internal systems (eg heating). In a period of combat, the life support can be turned off (everyone get's into their spacesuits), and they power up some particle accelerators.
| {
"pile_set_name": "StackExchange"
} |
Q:
"Explode" model?
I am working on an engine model and I would like to have the parts "explode" out and back in. Like this.
I have almost 1,000 parts and I really don't want to animate them manually, as I will be doing this for many different models.
Does anyone know of a plugin that can automatically take my groups and animate them in this fashion?
A:
I'm not sure if this will help but I found someone with a similar question on blenderartists.org animating an exploded view.
In order to move several objects away, enable the Manipulate Center Points button in the 3D Viewport header. When scaling or rotating, it will make the selected objects change their positions, but not their proportions.
A:
In 2.8 it is now called Transform > Affect Only > Locations.
To test it, make 2 objects, select both, then scale them (or hit 'S' on the keyboard and drag).
| {
"pile_set_name": "StackExchange"
} |
Q:
Python: Numpy combine arrays into 2x1 lists
I'm hoping to combine two arrays
A: ([1,2,5,8])
B: ([4,6,7,9])
to
C: ([[1,4],
[2,6],
[5,7],
[8,9]])
I have tried insert, append and concatenate, they only lump all elements together without giving the dimension in C.
I'm new to Python, any help will be appreciated.
A:
Use numpy.column_stack:
Stack 1-D arrays as columns into a 2-D array
np.column_stack((A, B))
array([[1, 4],
[2, 6],
[5, 7],
[8, 9]])
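For completeness, here is the same call as a self-contained snippet (nothing here beyond the arrays from the question):
import numpy as np

A = np.array([1, 2, 5, 8])
B = np.array([4, 6, 7, 9])

C = np.column_stack((A, B))  # shape (4, 2)
print(C)
# [[1 4]
#  [2 6]
#  [5 7]
#  [8 9]]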
| {
"pile_set_name": "StackExchange"
} |
Q:
How to check programmatically to which runtime Google Colab notebook is connected?
In Google Colab is there a way to check programmatically which runtime environment I am connected to: local or hosted?
I want to use this as a conditional in the code.
A:
Check sys.modules like so:
import sys
print ('Running in colab:', 'google.colab' in sys.modules)
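If you want to branch on it (the local-vs-hosted case from the question), a minimal sketch could look like this; IN_COLAB is just a name introduced here:
import sys

IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
    print('Connected to a hosted Colab runtime')
else:
    print('Connected to a local runtime')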
| {
"pile_set_name": "StackExchange"
} |
Q:
Javascript Ternary operator with empty else
I'm trying to convert the following if-else to its ternary operator representation in JavaScript, as follows:
var x = 2;
if (x === 2) {alert("2");}
else
{ /* do nothing */ }
But when I do this:
(t==2)?(alert("1")):();
Chrome throws a SyntaxError.
My question is -
How to have a ternary operator in JavaScript with an empty "else" branch, i.e. the part that comes after ":"?
Also, is it allowed to use a ternary operator in JavaScript to execute statements, not just perform an assignment?
Also: the above code was just a base case. I'm actually trying to get all the DOM elements of the page as an array (called all2) and then add only those elements to another array (called only) only if they have non-null class names. Here is my code:
all2.forEach(function(e){ e.getAttribute("class") ? (only.push(e.getAttribute("class"))) : (); });
If I leave the third operand blank, it throws a syntax error. Passing a null works
A:
Answer to your real question in the comments:
all2.forEach(function (e) {
e.getAttribute("class") && only.push(e.getAttribute("class"));
});
A:
You've put in a lot of unnecessary parentheses, and the best "null" value in JS is undefined.
document.getElementById('btn-ok').onclick = function(){
var val = document.getElementById('txt-val').value;
val == 2 ? alert(val) : undefined;
}
<input id="txt-val" type="number" />
<button type="button" id="btn-ok">Ok</button>
Using a single-line if statement is better, though:
if(value === 2) alert(value);
A:
Do this :
(t==2)?(alert("1")):null;
You could replace null by any expression that has no side effect. () is not a valid expression.
| {
"pile_set_name": "StackExchange"
} |
Q:
Display random number as percentage
I'm studying C# and I want to display the % sign after the random number shown in a label.
My code works well, but shows only the number. I want the number with the percent sign:
Random randnum = new Random();
label_showRandNum.Text = randnum.Next(-1, 101).ToString();
I just don't know how to show the number with the percent sign (%).
I've tried to format the label, but without success. I also have a message to be shown when the number chosen is, for example, 10:
int number;
number = Convert.ToInt32(label_showRandNum.Text);
if (number == 10)
{
MessageBox.Show("You have picked 10%!");
}
I think the percent sign will cause an exception when converting the variable.
I do not know what to do. I will appreciate any help.
Thanks in advance!
A:
You have multiple options.
First use Format p or P in ToString like:
Random randnum = new Random();
label_showRandNum.Text = randnum.Next(-1, 101).ToString("p");
But the problem with this is that the "p" format multiplies the number by 100 and then puts a percentage sign next to it.
Standard Numeric Format: "p" or "P"
Number multiplied by 100 and displayed with a percent symbol.
You can use Random.NextDouble method which produces values between 0 to 1, and then use that in your Label.
But the other option is:
You can concatenate the % Percentage sign with your label, and when you are parsing it you can remove it like:
label_showRandNum.Text = randnum.Next(-1, 101).ToString()
+ CultureInfo.CurrentCulture.NumberFormat.PercentSymbol;
This will result in Text holding a value like 10%, for the en-US culture.
Later when you are parsing the Text value to int you can do:
int number = int.Parse(label_showRandNum.Text.Replace
(CultureInfo.CurrentCulture.NumberFormat.PercentSymbol,
""));
I would rather use CultureInfo.CurrentCulture.NumberFormat.PercentSymbol than hard-code the % symbol, as it might differ depending on the culture.
| {
"pile_set_name": "StackExchange"
} |
Q:
Intalio and tasks with a deadline
Unlike Oracle's solution, Intalio doesn't seem to handle tasks with a deadline, though I'm not completely sure.
What I want is the task to be canceled once 48 hours have passed and to follow a different sequence flow in this case. Just like this.
Is there any way this purpose could be done with Intalio? Thanks
A:
Intalio lets you attach an interrupting timer to a sub-process (a group of tasks in the process flow). When that timer fires (48 hours in your case), execution jumps out of the sub-process to the next task in line. I think this might be exactly what you are looking for.
Human interaction tasks (like completing a form) also have a deadline option that you can set in the mapper. When the deadline is hit the human interaction task is canceled and the process automatically moves on.
Edit: added a picture to show how it looks in Intalio Designer. When the timer is hit, the process moves on to Task 3. You can also execute a separate stream (the optional task in this example).
http://i.stack.imgur.com/2AmrN.png
Hope this helps. Cheers.
| {
"pile_set_name": "StackExchange"
} |
Q:
Unable to use WinAppDriver
I am unable to run this code:
capabilities.setCapability("app", "C:\\Windows\\System32\\calc.exe");
CalculatorSession = new IOSDriver(new URL("http://127.0.0.1:4723"), capabilities);
This code opens the Calculator app, but the IDE says IOSDriver is wrong.
Original code
I changed just two lines of the code.
Software: Java, Eclipse, WinAppDriver.
I am automating a Windows application on Windows 10.
A:
I solved the above problem by updating the driver dependency manually rather than taking the auto-suggested version in Maven.
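For reference, a minimal sketch of the Windows-specific setup with the Appium java-client; this is an assumption-laden example (it presumes WinAppDriver is already running on 127.0.0.1:4723 and uses WindowsDriver instead of IOSDriver; in older java-client versions the class is generic, i.e. WindowsDriver<WindowsElement>):
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import io.appium.java_client.windows.WindowsDriver;

public class CalculatorTest {
    public static void main(String[] args) throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        // Path to the Windows application under test
        capabilities.setCapability("app", "C:\\Windows\\System32\\calc.exe");

        // WindowsDriver talks to WinAppDriver; IOSDriver is meant for iOS automation
        WindowsDriver calculatorSession =
                new WindowsDriver(new URL("http://127.0.0.1:4723"), capabilities);

        System.out.println("Session started: " + calculatorSession.getSessionId());
        calculatorSession.quit();
    }
}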
| {
"pile_set_name": "StackExchange"
} |
Q:
REST: Is it considered restful if API sends back two type of response?
We have a stock website and we help buyers connect with sellers. We are creating an API to let buyers push their contact details and get back the seller details. This is a transaction and gets logged in our database. We have created the following API:
The request is POST, the URL looks like:
/api/leads
The request body looks like:
{
"buyermobile": "9999999999",
"stockid": "123"
}
The response looks like:
{
"sellermobile" : "8888888888",
"selleraddress": "123 avenue park"
}
We have a new requirement, i.e. we need to send back a PDF URL (instead of "sellermobile" & "selleraddress"). This PDF would contain the seller details when the request comes from one of our clients.
We have modified the same API, now the request body looks like:
{
"buyermobile": "9999999999",
"stockid": "123",
"ispdf": true
}
The response looks like:
{
"sellerdetailspdf" : "https://example.com/sellerdetails-1.pdf",
}
Is it RESTful to do this? Or should we create a separate API for getting the response as a PDF?
A:
I wouldn't approach it this way. What happens when you need to add XLS? Do you add "isxls" to the request too?
Things I'd consider:
Use a mime type for content negotiation. Post the same request, and specify in the Accept header what you expect back - JSON, PDF, etc. You're then actually getting the report instead of a link to the report, which may or may not be better.
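For instance, the PDF variant could then be requested without any "ispdf" flag at all, simply by changing the Accept header (a sketch of the idea, not a full specification):
POST /api/leads HTTP/1.1
Content-Type: application/json
Accept: application/pdf

{
  "buyermobile": "9999999999",
  "stockid": "123"
}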
- or -
Include a link in the typical lead response.
{
"sellermobile" : "8888888888",
"selleraddress": "123 avenue park",
"_links": {
"seller-details-pdf": "https://example.com/sellerdetails-1.pdf"
}
}
- or -
Support a query parameter that specifies the type in the response.
- or -
Have a single property that specifies the type in the response, rather than a boolean. Much cleaner to extend when you add new response types.
The first two options have the bonus that you don't require clients to handle multiple response types to a single request. That's not forbidden by any spec, but it's annoying for clients. Try not to annoy the people who you want to pay you. :)
| {
"pile_set_name": "StackExchange"
} |
Q:
'SnippetSerializer' object is not callable
Having just followed Part 5 of the official tutorial, I've run into a problem. The hyperlinked API works very well, except when I click on a snippet. For instance, in the following:
HTTP 200 OK
Allow: GET, POST, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept
{
"count": 1,
"next": null,
"previous": null,
"results": [
{
"url": "http://localhost:8000/snippets/1/",
"owner": "ankush",
"title": "",
"code": "print 123",
"linenos": false,
"language": "python",
"style": "friendly",
"highlight": "http://localhost:8000/snippets/1/highlight/"
}
]
}
clicking on the url gives me this exception: 'SnippetSerializer' object is not callable. I thought I had copied everything correctly from the tutorial, but apparently I hadn't. The code is here: https://github.com/ankush981/rest-demo
Finally, here's the entire trace:
Environment:
Request Method: GET
Request URL: http://localhost:8000/snippets/1/
Django Version: 1.9.7
Python Version: 3.4.3
Installed Applications:
('rest_framework',
'snippets',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware')
Traceback:
File "/media/common/code/python/django-rest/tutorial/env/lib/python3.4/site-packages/django/core/handlers/base.py" in get_response
149. response = self.process_exception_by_middleware(e, request)
File "/media/common/code/python/django-rest/tutorial/env/lib/python3.4/site-packages/django/core/handlers/base.py" in get_response
147. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/media/common/code/python/django-rest/tutorial/env/lib/python3.4/site-packages/django/views/decorators/csrf.py" in wrapped_view
58. return view_func(*args, **kwargs)
File "/media/common/code/python/django-rest/tutorial/env/lib/python3.4/site-packages/django/views/generic/base.py" in view
68. return self.dispatch(request, *args, **kwargs)
File "/media/common/code/python/django-rest/tutorial/env/lib/python3.4/site-packages/rest_framework/views.py" in dispatch
466. response = self.handle_exception(exc)
File "/media/common/code/python/django-rest/tutorial/env/lib/python3.4/site-packages/rest_framework/views.py" in dispatch
463. response = handler(request, *args, **kwargs)
File "/media/common/code/python/django-rest/tutorial/env/lib/python3.4/site-packages/rest_framework/generics.py" in get
286. return self.retrieve(request, *args, **kwargs)
File "/media/common/code/python/django-rest/tutorial/env/lib/python3.4/site-packages/rest_framework/mixins.py" in retrieve
57. serializer = self.get_serializer(instance)
File "/media/common/code/python/django-rest/tutorial/env/lib/python3.4/site-packages/rest_framework/generics.py" in get_serializer
111. return serializer_class(*args, **kwargs)
Exception Type: TypeError at /snippets/1/
Exception Value: 'SnippetSerializer' object is not callable
A:
OK dotslash, I checked that code.
Shouldn't this:
class SnippetDetail(generics.RetrieveUpdateDestroyAPIView):
'''Retrieve, update or delete a snippet'''
queryset = Snippet.objects.all()
serializer_class = SnippetSerializer()
permission_classes = (permissions.IsAuthenticatedOrReadOnly, IsOwnerOrReadOnly)
Be this:
class SnippetDetail(generics.RetrieveUpdateDestroyAPIView):
'''Retrieve, update or delete a snippet'''
queryset = Snippet.objects.all()
serializer_class = SnippetSerializer
permission_classes = (permissions.IsAuthenticatedOrReadOnly, IsOwnerOrReadOnly)
| {
"pile_set_name": "StackExchange"
} |
Q:
Does Selenium ever need Thread.Sleep
Beyond temporarily and intentionally delaying Selenium at designated points for debugging, is there ever a valid purpose for a Thread.sleep(x) in a run?
This could apply to any language, using the corresponding thread-sleep function.
A:
In a word, yes.
Here's one scenario where it was necessary to sleep for a specified number of seconds:
The application could display credit card numbers to a small number of users who had the privilege to view secured data. The display window was designed to log who viewed the data and which transaction was being viewed; and to close itself after a configurable length of time.
My scripts would set the display time to a value short enough that the delay wasn't excessive, then open the secured data. At this point the script would sleep for the length of the display time, then check that the form was no longer displayed.
With the tool I was using at the time, this was the simplest and cleanest way to verify that the display time was being honored.
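In Selenium terms, that kind of check might look roughly like this sketch (the locator and the displaySeconds parameter are hypothetical; the fixed sleep is deliberate here because the display time itself is what is being verified):
import static org.junit.Assert.assertTrue;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class SecuredDataDisplayCheck {
    void verifyAutoClose(WebDriver driver, int displaySeconds) throws InterruptedException {
        // Secured data is already open at this point; wait out the configured display time
        Thread.sleep(displaySeconds * 1000L);
        // The form should have closed itself by now
        assertTrue("Secured-data form should auto-close after the display time",
                driver.findElements(By.id("secureDataForm")).isEmpty());
    }
}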
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I redirect all but a single url to https in ASP.Net?
I have the following code in my web.config:
<rewrite>
<rules>
<rule name="Redirect HTTP to HTTPS" stopProcessing="true">
<match url="(.*)" />
<conditions>
<add input="{HTTPS}" pattern="^OFF$" />
</conditions>
<action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
redirectType="SeeOther" />
</rule>
</rules>
</rewrite>
I would like to redirect everything EXCEPT for http://www.mysite.com/manual to https.
How would I modify the above to allow this?
A:
It should work if you add the following condition inside your conditions tag:
<add input="{REQUEST_URI}" negate="true" pattern="^/manual/*" ignoreCase="true" />
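Putting the two together, the whole rule would look something like this (the original rule from the question plus the negate condition):
<rewrite>
  <rules>
    <rule name="Redirect HTTP to HTTPS" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTPS}" pattern="^OFF$" />
        <add input="{REQUEST_URI}" negate="true" pattern="^/manual/*" ignoreCase="true" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="SeeOther" />
    </rule>
  </rules>
</rewrite>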
| {
"pile_set_name": "StackExchange"
} |
Q:
Modify a SendGoogleForm script v3
I have almost got all the way with my script, but now I'm stuck on an error telling me "66: Missing semicolon". I have tried what I can, but I don't understand it.
I read that the semicolon should be used after a function is assigned to a variable, if I understand correctly, but the PowerShell code I put inside the quotes ("") should not be affected, right? Maybe I've got this all wrong.
/* Send Google Form by Email v2.1 */
/* For customization, contact the developer at [email protected] */
/* Tutorial: http://www.labnol.org/?p=20884 */
function Initialize() {
var triggers = ScriptApp.getProjectTriggers();
for(var i in triggers) {
ScriptApp.deleteTrigger(triggers[i]);
}
ScriptApp.newTrigger("SendGoogleForm")
.forSpreadsheet(SpreadsheetApp.getActiveSpreadsheet())
.onFormSubmit()
.create();
}
function SendGoogleForm(e)
{
try
{
//Fill in the email addresses for each department here.
var it = "";
//Subject of the email
var subject = "Ny/redigerad anställning";
// Variables
var test = "Test";
var temporarypass = "Provide a Temporary Password for this user";
var semicolon = ";";
//Combines all email addresses into one.
var email = it;
// You may replace this with another email address
//var email = Session.getActiveUser().getEmail();
var s = SpreadsheetApp.getActiveSheet();
var columns = s.getRange(1,1,1,s.getLastColumn()).getValues()[0];
var message = "";
// Only include form fields that are not blank
for ( var keys in columns ) {
var key = columns[keys];
if ( e.namedValues[key] && (e.namedValues[key] != "") ) {
message += key + ' :: '+ e.namedValues[key] + "<br><br>";
}
if (key == "Förnamn")
var fornamn = e.namedValues[key];
else if (key == "Efternamn")
var efternamn = e.namedValues[key];
else if (key == "Placering")
var placering = e.namedValues[key];
else if (key == "Titel")
var titel = e.namedValues[key];
else if (key == "Avdelning")
var avdelning = e.namedValues[key];
}
//Output for the email that is sent.
subject += ", " + fornamn + " " + efternamn ;
message +="function Remove-DiacriticsAndSpaces { Param( [String]$inputString ) $sb = [Text.Encoding]::ASCII.GetString([Text.Encoding]::GetEncoding("Cyrillic").GetBytes($inputString)); return($sb -replace '[^a-zA-Z0-9]', '')}; New-ADUser -SamAccountName (((Remove-DiacriticsAndSpaces -InputString '"+fornamn+"')+"."+(Remove-DiacriticsAndSpaces -InputString '"+efternamn+"')).ToLower()) -UserPrincipalName (((Remove-DiacriticsAndSpaces -InputString '"+fornamn+"')+"."+(Remove-DiacriticsAndSpaces -InputString '"+efternamn"')+"@test.com").ToLower()) -EmailAddress (((Remove-DiacriticsAndSpaces -InputString '"+fornamn+"')+"."+(Remove-DiacriticsAndSpaces -InputString '"+efternamn+"')+"@test.com").ToLower()) -Name '"+fornamn+" "+efternamn+"' -GivenName '"+fornamn+"' -Surname '"+efternamn+"' -Description '"+test+", "+avdelning+", "+titel+"' -Title '"+titel+"' -OfficePhone ' ' -Path 'OU=Users,OU=test,DC=intern,DC=test,DC=se' -Company 'test' -Department '"+avdelning+"' -Title '"+titel+"'; $NewPassword = (Read-Host -Prompt 'Provide a Temporary Password for this user' -AsSecureString); Set-ADAccountPassword -Identity '"+fornamn+"."+efternamn+"' -NewPassword $NewPassword -Reset; Set-ADAccountControl -Identity '"+fornamn+"."+efternamn+"' -Enabled $true";
var htmlBody = "<html><p>" + message + "</p></html>";
MailApp.sendEmail(email, subject, message, {'htmlBody': htmlBody });
} catch (e) {
Logger.log(e.toString());
}
}
Thank you!
A:
This is a very dense thing to follow, as this string is endless and full of quotes and parentheses, but here is my try:
Change the double quotes around the word "Cyrillic" to single quotes: 'Cyrillic'
There are several "stop points" without a + (only when they are next to "+fornamn+"')+ closing with a parenthesis; the others are fine):
fornamn+"')+"."+(Re
should be
fornamn+"')+"+"."+"+(Re
Change the double quotes around "@test.com" to single quotes ('@test.com'), or concatenate it as +"@test.com"+
There is a missing + for one "efternamn": '"+efternamn"' should be '"+efternamn+"')
After these changes it didn't show any other error, but I can't tell if the injected code would work.
To save you some time, here is the full string with my corrections:
"function Remove-DiacriticsAndSpaces { Param( [String]$inputString ) $sb = [Text.Encoding]::ASCII.GetString([Text.Encoding]::GetEncoding('Cyrillic').GetBytes($inputString)); return($sb -replace '[^a-zA-Z0-9]', '')}; New-ADUser -SamAccountName (((Remove-DiacriticsAndSpaces -InputString '"+fornamn+"')+"+"."+"+(Remove-DiacriticsAndSpaces -InputString '"+efternamn+"')).ToLower()) -UserPrincipalName (((Remove-DiacriticsAndSpaces -InputString '"+fornamn+"')+"+"."+"+(Remove-DiacriticsAndSpaces -InputString '"+efternamn+"')+'@test.com').ToLower()) -EmailAddress (((Remove-DiacriticsAndSpaces -InputString '"+fornamn+"')+"+"."+"+(Remove-DiacriticsAndSpaces -InputString '"+efternamn+"')+'@test.com').ToLower()) -Name '"+fornamn+" "+efternamn+"' -GivenName '"+fornamn+"' -Surname '"+efternamn+"' -Description '"+test+", "+avdelning+", "+titel+"' -Title '"+titel+"' -OfficePhone ' ' -Path 'OU=Users,OU=test,DC=intern,DC=test,DC=se' -Company 'test' -Department '"+avdelning+"' -Title '"+titel+"'; $NewPassword = (Read-Host -Prompt 'Provide a Temporary Password for this user' -AsSecureString); Set-ADAccountPassword -Identity '"+fornamn+"."+efternamn+"' -NewPassword $NewPassword -Reset; Set-ADAccountControl -Identity '"+fornamn+"."+efternamn+"' -Enabled $true"; ```
| {
"pile_set_name": "StackExchange"
} |
Q:
How to deal with PATH in installation script for my applications?
I want to create a postinst script for my application's Debian package, and I need to modify the /etc/environment file (add a path to it) to make my application's bin directory accessible globally on the system.
With my current knowledge all I can do now is:
remove last " character in /etc/environment file (for now I don't know how to do it in bash, maybe I will try this: How can I remove the last character of a file in unix?)
append :
append /usr/some/directory/bin (my application bin dir) to that file
append "
Is there an easier way to add a path to the environment variables permanently and globally?
Background:
I'm working on a few packages to automate the installation/deployment process. I have a few things like Java, bash scripts, drivers and some C/C++ tool applications to deploy on many devices.
A:
The path isn’t necessarily defined in /etc/environment, and even if it is, there is no guarantee that path will end up being the path that’s used by end users.
In a Debian package, to make commands available generally, you should install them to a directory which is expected to be on the path, typically /usr/bin. If you can’t move your binaries there, it’s fine to add wrapper scripts in /usr/bin which know where to find the “real” commands.
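Such a wrapper can be as small as this sketch (paths taken from the question; "mytool" is a placeholder for your command name):
#!/bin/sh
# /usr/bin/mytool -- thin wrapper so the real binary does not need to be on PATH
exec /usr/some/directory/bin/mytool "$@"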
For Java, you shouldn’t try to re-package things yourself; use java-package to package Oracle JDKs and JREs, or the OpenJDK packages already available in Debian. See Installing JDK in a FHS-compliant way and Ways to configure alternative installations of Oracle JDK on Ubuntu? for details.
| {
"pile_set_name": "StackExchange"
} |
Q:
Convert a Math Equation to output as a float value
So I have this line of code:
var maths = anInteger / keyprice;
I print it this line:
textBox1.Text = textBox1.Text + "Converted into: " + maths;
I was wondering how I can make it output a float; say I have anInteger = 42 and keyprice = 24 and want the result as a 2 dp number.
A:
In C# the result of dividing 2 integers is an integer. If you want a float, cast one of them to float:
var maths = anInteger / (float)keyprice;
textBox1.Text = $"{textBox1.Text} Converted into: {maths.ToString("n2")}";
If prior to C# 6.0 then:
var maths = anInteger / (float)keyprice;
textBox1.Text = textBox1.Text + "Converted into: " + maths.ToString("n2");
| {
"pile_set_name": "StackExchange"
} |
Q:
Windows 8 security policy "LAN Manager Authentication Level"
I can't get into one of our enterprise apps, and the app administrator told me I need to change the LAN Manager Authentication Level to "Send NTLM Responses Only". That normally happens via Group Policy; the problem is that I'm running Windows 8 and my device is non-managed. I cannot find that particular setting in the Windows 8 policy manager. Any help?
A:
Unfortunately, not all versions of Windows ship the policy editor. Windows 8 doesn't, for example, but Windows 8 Pro does, so whether you can use it depends on your edition.
To see if you can access it, press Win+Q to search for it or Win+R to open the "Run" dialog. Either way, type gpedit.msc; if it shows up in the search results, or you are able to run it, do so.
Then navigate to Local Computer Policy -> Windows Settings -> Security Settings -> Local Policies -> Security Options. There locate Network security: LAN Manager authentication level and set that policy to what your admin told you.
If you're not able to access the policies editor you can accomplish the same by editing the registry yourself. Concretely the key you have to edit for that policy is:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa
There, add (or edit) a DWORD value named LmCompatibilityLevel and set it to the value you require according to the following table (which in your case is 2):
0 - Send LM & NTLM responses
1 - Send LM & NTLM responses, use NTLMv2 session security if negotiated
2 - Send NTLM response only
3 - Send NTLMv2 response only
4 - Send NTLMv2 response only, refuse LM
5 - Send NTLMv2 response only, refuse LM & NTLM
I hope that helps.
| {
"pile_set_name": "StackExchange"
} |
Q:
MemSQL generate UUID
I would like to migrate the database from MySQL to MemSQL. The original database uses UUIDs as IDs, generated by the UUID() function. Is there any way to use a similar function to generate those IDs?
A:
MemSQL does not have a built-in UUID() function, but you can generate unique ids in various ways, depending on what you need them for, such as:
Generate random hashes. You can do this by e.g. SHA1(RAND()), or to get 16 bytes of randomness CONCAT(SUBSTRING(SHA1(RAND()), 1, 16), SUBSTRING(SHA1(RAND()), 1, 16)).
Use auto_increment to generate IDs unique within a table
Generate UUIDs on the application side
If you need them to follow the UUID format, you can reformat them with string functions
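For that last point, a rough sketch of shaping 32 random hex characters into the canonical 8-4-4-4-12 layout (plain MySQL-style string functions; note this is only a UUID-shaped random string, not an RFC 4122 version-4 UUID):
SELECT CONCAT(
    SUBSTRING(h, 1, 8), '-',
    SUBSTRING(h, 9, 4), '-',
    SUBSTRING(h, 13, 4), '-',
    SUBSTRING(h, 17, 4), '-',
    SUBSTRING(h, 21, 12)
) AS uuid_like
FROM (
    SELECT CONCAT(SUBSTRING(SHA1(RAND()), 1, 16),
                  SUBSTRING(SHA1(RAND()), 1, 16)) AS h
) t;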
| {
"pile_set_name": "StackExchange"
} |
Q:
How to create a Firebase Audience to ask for an App Store Review/Rating
I want to create a Firebase Audience to ask to rate/review my app.
The condition I would like to have for a user to fit into the above audience is: a user who has opened the app at least 10 times, over the course of 3 distinct days.
Is it possible to create an audience with this condition?
I am open to suggestions to change/improve the condition. Or even a completely different condition that will achieve the same goal.
A:
You can create a custom audience, choose events as the condition, and pick session_start.
Then you can choose additional options like the number of sessions and the period.
This does not guarantee 3 distinct days, though. But by default Firebase will only count a new session every 30 minutes, so most of those users will have had their sessions over 3 days anyway. Firebase also gives you a preview of the audience size, so you can easily check how many users would be in that audience with a period of e.g. one day.
In general I would recommend asking for a rating from users who had a positive experience within your app, such as completing a certain action, and using that event for the audience.
| {
"pile_set_name": "StackExchange"
} |
Q:
how can i get the database details from another php file?
Please let me know how I can read the config/database.php file in Laravel.
For example: I have a file called test.php in the root folder and I want the database, host, user and password from config/database.php. How can I achieve this?
'connections' => array(
'mysql' => array(
'driver' => 'mysql',
'host' => '*********',
'database' => '********',
'username' => '********',
'password' => '*******',
'charset' => 'utf8',
'collation' => 'utf8_unicode_ci',
'prefix' => '',
),
),
A:
To get the database connection parameters (server, username, password etc) if you're using MySQL, you can do this:
echo "Driver: " . Config::get('database.connections.mysql.driver') . "<br/>\r\n";
echo "Host: " . Config::get('database.connections.mysql.host') . "<br/>\r\n";
echo "Database: " . Config::get('database.connections.mysql.database') . "<br/>\r\n";
echo "Username: " . Config::get('database.connections.mysql.username') . "<br/>\r\n";
echo "Password: " . Config::get('database.connections.mysql.password') . "<br/>\r\n";
On my local development machine this gives:
Driver: mysql
Host: localhost
Database: local_dev_db
Username: root
Password: not-my-real-pwd
...obviously you should never show your password (or any of these other details) in your live app! But if it's just for your information on a local development machine you should be fine.
If you're not using MySQL, just replace mysql with sqlite or pgsql or whatever.
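As a side note (not in the original answer): on Laravel 5 and later, the global config() helper reads the same values, e.g.:
$host = config('database.connections.mysql.host');
$database = config('database.connections.mysql.database');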
| {
"pile_set_name": "StackExchange"
} |
Q:
Esoteric Programming Languages - Acceptable or Discouraged?
In A Nutshell: This is a question regarding the acceptability of questions related to esoteric programming languages, such as:
Brain@%#!
Ook!
LOLCODE
Omgrofl
Whitespace
Historical Significance
Proof of (real-world) usage of esoteric languages lies in the following quote:
The game Lost Kingdom won the First Annual Classic 2k Text Adventure Competition in 2004, and has been (re)written and enhanced by the original author in brain@%#!
Source: The Lost Kingdom Brain@%# Edition
This shows that esoteric programming languages can actually be used to develop real-world applications.
Actual Questions Relating To Esoteric Programming Languages
Practical COW example program?
What good is the NERFIN loop operation in LOLCODE?
brain%!@# greater sign
The Big Cookie
Now for the moment we've all been waiting for...
Are esoteric programming languages acceptable programming questions or discouraged?
A:
The Big Cookie
Now for the moment we've all been waiting for...
Yes! Obscure programming languages are still programming languages, and therefore questions about them are still on topic for Stack Overflow.
The notion that we might encourage or discourage questions about a particular language or technology strikes me as an utterly nonsensical one. If the questions meet our guidelines, then they are on topic. The only encouragement we do and need to provide is to ask questions that are constructive and on topic. If yours meet those requirements, proceed as desired.
A:
There would be absolutely no reason to specifically discourage any language or family of languages. If you have questions on any language that would qualify as an esoteric language, feel free to ask them.
Esoteric languages are as much a part of Stack Overflow as every other language; two of them (Python and Ruby1) are even featured in SO's 404 polyglot:
1 If you are a Pythonista or a Rubyist and feel like chopping my head off, just replace that with C and Perl.
| {
"pile_set_name": "StackExchange"
} |
Q:
Converting index of string to integers
I'm attempting to convert the characters at a specific String index to an integer.
This is the function I have:
public int[] digitsOfPi(int n) {
{
String piDigits = Double.toString(Math.PI);
int[] piArray = new int[n];
for (int i = 0; i < n; i++)
{
piArray[i] = Character.digit(piDigits.charAt(i), n);;
}
return piArray;
}
}
Unfortunately, when I test this function with digitsOfPi(3), I got
[I@15db9742
Any help would be much appreciated.
A:
Use Arrays.toString(arg)
System.out.println(Arrays.toString(digitsOfPi(3)));
... to get pretty result output like
[1, 2, 3]
What you have printed is a reference to the array, not its content. That is why I first asked about the way you're outputting the result.
The signature of Character.digit() is (char ch, int radix), so by passing n you are effectively using a base-n (here ternary) number system. Moreover, it returns -1 when the char ch is not a valid digit in that radix; in your case "3.1" becomes [invalid, invalid, 1]. You need to use 10 as the radix argument and skip the 2nd position (the decimal point).
Or you can simply do
piArray[i] = piDigits.charAt(i) - '0';
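Putting both fixes together, a corrected version of the method could look like this (drop the decimal point, use radix 10):
public int[] digitsOfPi(int n) {
    // "3.141592653589793" -> "3141592653589793"
    String piDigits = Double.toString(Math.PI).replace(".", "");
    int[] piArray = new int[n];
    for (int i = 0; i < n; i++) {
        // Radix 10, not n: with radix n, '3' was reported as an invalid digit (-1)
        piArray[i] = Character.digit(piDigits.charAt(i), 10);
    }
    return piArray;
}
// System.out.println(Arrays.toString(digitsOfPi(3)));  prints [3, 1, 4]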
| {
"pile_set_name": "StackExchange"
} |
Q:
hexdump vs xxd format difference
I was searching for how to do a reverse hexdump and found xxd mentioned. However, it does not seem to work with simply:
xxd -r hexdumpfile > binaryfile
I then compared the difference between outputs of xxd infile and hexdump infile, and found three differences:
xxd output has a colon after the address
xxd output has the positions in the data reversed (for example, 5a42 in hexdump output becomes 425a in xxd output)
There are some extra characters after each line
I only have the hexdumped version of certain files on a server. How can I correctly get back the binary data using xxd?
A:
There's no one command that I know of that will do the conversion, but it can easily be broken up into a few steps:
Strip addresses from hexdump output using sed
Convert into binary using xxd
Endian conversion (for example, 5a42 becomes 425a) using dd
Here's the full command:
sed 's/^[0-9]*//' hexdump | xxd -r -p | dd conv=swab of=binaryfile
| {
"pile_set_name": "StackExchange"
} |
Q:
Lost a tag badges I had last week
Last week I had 105 in the xsd tag and now I have 97? I'm sure I didn't get 8 downvotes in that time.
Can anyone help with this?
A:
There are 2 options:
One or more answers were deleted (possibly together with the question they were posted on). You don't always lose reputation for that. Check the 'show removed posts' checkbox in your reputation history.
A question was retagged, so your answer no longer counts towards that tag. Sort your answers by recent activity and see if any of the posts that used to have the tag were edited.
| {
"pile_set_name": "StackExchange"
} |
Q:
Adaboost - update of weights
I am self-studying AdaBoost and reading the following useful article: http://www.inf.fu-berlin.de/inst/ag-ki/adaboost4.pdf . I am trying to understand, as per below, the following questions:
1) When we select and extract from the pool of classifiers, do we extract from a given pool of classifiers (e.g. an existing pool of the first 100 trees), or do we (as I presume) create the optimal classifier from scratch (e.g. a new tree with different splitting variables)?
2) I am failing to see step 3 (the update of weights): why do we know that the new weights are the old weights multiplied by $e^{a_m}$ in the case of a hit?
A:
For 1), yes on both counts. You can view training a new classifier as selecting the best classifier from the "pool" defined as the range (i.e. the collection of all possible resultant classifiers) of the classification algorithm.
For 2), this re-weighting scheme is simply part of the definition of the AdaBoost algorithm. A reasonable question is, of course, why this choice? Re-weighting in this way allows one to bound the training error by an exponentially decreasing function. Here is Theorem 3.1 from Boosting by Schapire and Freund:
Given the notation of algorithm 1.1 (adaboost) let $\lambda_t = \frac{1}{2} - e_t$, and let $D_1$ be any initial distribution over the training set. Then the weighted training error of the combined classifier $H$ with respect to $D_1$ is bounded as
$$ Pr( H(x_i) \neq y_i) \leq \exp \left( -2 \sum_t \lambda_t^2 \right) $$
You can use this to show that, if your base (weak) classifiers have a fixed edge over being random (i.e. a small bias to being correct, no matter how small), then adaboost drives down the training error exponentially fast. The proof of this inequality uses the relation (3) in a fundamental way.
I should note, there is nothing obvious about the algorithm. I'm sure it took years and years of meditation and pots and pots of coffee to come into its final form - so there is nothing wrong with an initial ??? response to the setup.
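For reference, the re-weighting step the question asks about is usually written as (standard AdaBoost notation, with $Z_t$ the normalising constant):
$$ D_{t+1}(i) = \frac{D_t(i)\,\exp\big(-\alpha_t\, y_i\, h_t(x_i)\big)}{Z_t}, \qquad \alpha_t = \frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t}, $$
so a correctly classified example ($y_i h_t(x_i) = +1$) is multiplied by $e^{-\alpha_t}$ and a misclassified one by $e^{+\alpha_t}$; formulations that only re-weight one of the two cases are equivalent up to the normalisation by $Z_t$.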
| {
"pile_set_name": "StackExchange"
} |
Q:
A moving element to push adjacent element only if they collide
I have a container with 2 children.
One child has dynamic width and at its maximum width can fill the container.
The other child has fixed width and starts off hidden, as its starting point is to the right of the overflow:hidden container.
What I want is the fixed-width child to move to the left so that it exactly fits into the right of the container such that
a) If both children fit into the container - the other element should stay put on the left, and
b) If there is no room for both elements - the fixed-width element should push the other element to the left as much as it needs to in order to fit into the right of the container.
Here is what I tried:
Attempt #1
.container {
width: 200px;
height: 50px;
border: 1px solid green;
overflow: hidden;
white-space: noWrap;
}
span {
height: 50px;
display: inline-block;
}
.child1 {
background: aqua;
float: right;
width: 50px;
margin-right: -50px;
transition: margin .2s;
}
.container:hover .child1 {
margin-right: 0;
}
.child2 {
background: tomato;
//width: 100%;
}
<div class="container">
<span class="child1">Fixed</span>
<span class="child2">Dynamic Width</span>
</div>
<div class="container">
<span class="child1">Fixed</span>
<span class="child2">Here is a Dynamic Width box</span>
</div>
Condition a) Succeeds but condition b) Fails
Attempt #2
.container {
width: 200px;
height: 50px;
border: 1px solid green;
overflow: hidden;
white-space: noWrap;
}
span {
height: 50px;
display: inline-block;
}
.child2 {
background: aqua;
width: 50px;
margin: 0;
float: right;
margin-right: -50px;
transition: margin .2s;
}
.container:hover .child1 {
margin-left: -50px;
}
.container:hover .child2 {
margin: 0;
}
.child1 {
background: tomato;
transition: margin .2s;
}
<div class="container">
<span class="child1">Dynamic Width</span>
<span class="child2">Fixed</span>
</div>
<div class="container">
<span class="child1">Here is a Dynamic Width box</span>
<span class="child2">Fixed</span>
</div>
Condition a) Fails and condition b) Succeeds
Can both conditions be fulfilled with CSS alone?
PS: The markup which I provided in the demos may be modified. CSS3, including flexbox, is also fine.
A:
Here is a CSS only solution.
The trick is to use this basic rule:
Consider two or more inline elements rendered side by side.
If you increase the width of the first element, the second element is pushed to the right.
The problem is that you need the elements to move to the left. I solved this by inverting the X direction with scaleX(-1) on the container and then re-inverting the child elements.
To help you better understand this, you can comment out the transform: scaleX(-1); in the jsfiddle link below, and watch what happens.
The beauty of this is that you don't need to know the width of the .child2. You just need to push it to the left.
.container {
width: 200px;
height: 50px;
border: 1px solid green;
overflow: hidden;
white-space: nowrap;
text-align: right;
transform: scaleX(-1);
}
span {
height: 50px;
display: inline-block;
transform: scaleX(-1);
}
.child1 {
background: aqua;
width: 50px;
margin-left: -50px;
float: left;
transition: margin-left .2s;
text-align: left;
}
.child2 {
background: tomato;
}
.container:hover .child1 {
margin-left: 0;
}
<div class="container">
<span class="child1">Fixed</span>
<span class="child2">Dynamic Width</span>
</div>
<div class="container">
<span class="child1">Fixed</span>
<span class="child2">Here is a Dynamic Width box</span>
</div>
Also on jsfiddle
Solution 2
Another slightly simpler solution is to use direction: rtl; on the container. By reversing the direction of inline elements from right to left, we achieve the same effect without the need to use CSS3 transformations.
See http://jsfiddle.net/epfqjtft/12/
A:
Since css can't do conditional statements (bar media queries), I don't think this is truly possible with css alone.
update
I have seen that it is in fact possible using CSS3 transforms (which work in modern browsers), but in case some users want older-browser support, which CSS3 transforms can't provide, I'll leave this here anyway.
Apart from that, I've used positioning instead of floats to 'clean up' the styling (and attempted the jquery):
$('.container').hover(function() {
var parentWidth = $(this).width();
var thisWidth = $(this).find(".child1").width() + 50; /*i.e. width of fixed box*/
if (parentWidth < thisWidth) { /*if it doesn't fit, move it!*/
$(this).find('.child1').addClass("moveLeft");
}
}, function() {
$(this).find(".child1").removeClass("moveLeft");
});
.container {
width: 200px;
height: 50px;
border: 1px solid green;
overflow: hidden;
white-space: noWrap;
position: relative;
}
span {
height: 50px;
display: inline-block;
}
.child2 {
background: aqua;
width: 50px;
margin: 0;
position: absolute;
top: 0;
right: -50px;
transition: all .2s;
}
.child1 {
background: tomato;
transition: all .2s;
position: absolute;
top: 0;
left: 0;
}
.container:hover .child2 {
right: 0;
}
.moveLeft:hover {
left: -50px;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="container">
<span class="child1">Dynamic Width</span>
<span class="child2">Fixed</span>
</div>
<div class="container">
<span class="child1">Here is a Dynamic Width box</span>
<span class="child2">Fixed</span>
</div>
As for your 'solution', you will have to test if the child + 50px is greater than the parent width, if so, move child1. If not, no action is needed.
A:
Okay, I changed LinkinTED's code a little bit. Try this:
http://jsfiddle.net/epfqjtft/9/
Of course, I don't know if it's something you can work with. These types of problems should really be solved with jQuery.
.container {
width: 200px;
height: 50px;
border: 1px solid green;
display: table;
table-layout: fixed;
transition: all 2s;
}
span {
height: 50px;
display: table-cell;
transition: all .2s;
}
.child1 {
background: tomato;
width: 100%;
}
.child2 {
background: aqua;
width: 0px;
overflow: hidden;
transition: all .2s;
}
.container:hover .child2 {
width: 50px;
}
<div class="container">
<div class="wrapper">
<span class="child1">Dynamic Width</span>
</div>
<span class="child2">Fixed</span>
</div>
<div class="container">
<div class="wrapper">
<span class="child1">Here is a Dynamic Width box</span>
</div>
<span class="child2">Fixed</span>
</div>
| {
"pile_set_name": "StackExchange"
} |
Q:
Strategy for naming swing components
In our Swing application, we are using an automated testing tool in QA (QF-Test) that works better when the Swing components are named (by calling Component.setName). Although its automatic name assignments work reasonably well, we are introducing SwingX components into the project and that is causing some issues for the tool.
There are a lot of potential components on a screen (your typical business-app data entry screens, but a lot of them; the app is on the complexity level of an ERP). What options are there for naming Swing components in a reasonably unobtrusive manner?
A:
I typically store the fields in a JPanel in properties on the JPanel, like this:
private JLabel firstNameLabel;
private JTextField firstNameTextField;
At the end of the routine that instantiates and lays out these components, you could run a routine that uses java.lang.reflect to loop through each property of the panel. If a property descends from the class Component, you can call setName on it with the name of the property. So for example, it would end up calling:
this.firstNameLabel.setName("firstNameLabel");
this.firstNameTextField.setName("firstNameTextField");
...except through java.lang.reflect
You could also have the routine examine the camel-case names of the variables and replace them with standard casing and spaces. This would make the names more readable.
This approach will ensure that no matter what components you add to a panel, they will all get friendly names.
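A minimal sketch of that reflection pass (written from scratch here, so treat the names as illustrative): it walks the declared fields of the panel and names every Component-typed field after the field itself.
import java.awt.Component;
import java.lang.reflect.Field;
import javax.swing.JPanel;

public final class ComponentNamer {
    private ComponentNamer() {}

    /** Gives every Component field of the panel the field's own name, if not already named. */
    public static void nameComponents(JPanel panel) {
        for (Field field : panel.getClass().getDeclaredFields()) {
            if (Component.class.isAssignableFrom(field.getType())) {
                field.setAccessible(true);
                try {
                    Component c = (Component) field.get(panel);
                    if (c != null && c.getName() == null) {
                        c.setName(field.getName());
                    }
                } catch (IllegalAccessException e) {
                    // Skip fields we are not allowed to read
                }
            }
        }
    }
}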
| {
"pile_set_name": "StackExchange"
} |
Q:
Direct speech and the author's words
Hello! Please advise me regarding direct speech and the author's words; I could not find an answer to my question anywhere.
The question is the following: if the direct speech is followed by the author's words and then a period, can the sentence after the period also be part of the author's words?
And do I understand correctly that the author's words are a construction relating directly to the direct speech, and not to the text in general that is not direct speech?
For example:
— Ты не прав, — сказал Френк, посмотрев на брата. В глазах Френка читалось презрение.
(Абзац) Повисла напряженная пауза.
In this example, the text after the paragraph break does not belong to the direct speech and is not the author's words; do I understand that correctly?
And is the sentence «В глазах Френка читалось презрение» ("Contempt could be read in Frank's eyes") part of the author's words?
A:
In this topic it is important to understand the terminology.
1) Sentences with direct speech. Sentences with direct speech consist of the author's words and the direct speech itself, and various arrangements of the two relative to each other are considered.
2) Direct speech is an utterance introduced verbatim into the author's text. It is someone else's speech reproduced exactly, conveyed on behalf of the person who said or wrote it.
3) "The author's words" is a conventional term. These are the words that indicate to whom the direct speech belongs (or the author's words introducing the direct speech). In practice, the author's words are the part of the author's text adjoining the direct speech, so that a single construction is formed, punctuated according to specific rules.
«— Ты не прав, — сказал Френк, посмотрев на брата.» This is a sentence with direct speech; it consists of the direct speech and the author's words introducing that direct speech. What follows next is the author's text (not the author's words).
| {
"pile_set_name": "StackExchange"
} |
Q:
How do I setup a list of test organizations with my new DBContext?
I have an application that uses multiple contexts for EF; some are code first, some server first, and all are dynamic.
I have a class that I call to get these contexts in my app.
Each context implements an interface such as:
public interface IDatabaseContextSwitcher
{
IVGSContext GetDatabase(string organization);
IVGSContext GetDatabase(Guid organizationGuid, string email);
IVGSServerConn GetServerDatabase(string databaseName);
IAuthContext GetAuthorizationDatabase();
}
I therefore have a class that implements instances of these interfaces in the application (VGSContext, VGSServerConn, and AuthContext).
I am trying to test these in my test project. I have made new classes from the interfaces with the plan to plug in some DbSets into these new classes and then test that my controllers do the correct thing.
However, I can't seem to figure out how to initialize the DBSets.
For example the following dies on Add:
public AuthContextForTesting()
{
Organizations.Add(new Organization()
{OrganizationName = "Test1", PK_Organization = Guid.Parse("34CE4F83-B3C9-421B-B1F3-42BBCDA9A004")});
var cnt = Organizations.Count();
}
public DbSet<Organization> Organizations { get; set; }
I tried to initialize the DBSet with:
Organizations=new DbSet();
But it gives an error that this is not allowed due to permissions.
How do I set up my initial dbsets in code for my tests?
A:
To be able to do this, you first have to derive a class from DbSet. Sadly, my app uses both EF Core and EF 6, so I had to create two classes.
EF 6 Class
public class FakeDbSet<T> : System.Data.Entity.DbSet<T>, IDbSet<T> where T : class
{
List<T> _data;
public FakeDbSet()
{
_data = new List<T>();
}
public override T Find(params object[] keyValues)
{
throw new NotImplementedException("Derive from FakeDbSet<T> and override Find");
}
public override T Add(T item)
{
_data.Add(item);
return item;
}
public override T Remove(T item)
{
_data.Remove(item);
return item;
}
public override T Attach(T item)
{
return null;
}
public T Detach(T item)
{
_data.Remove(item);
return item;
}
public override T Create()
{
return Activator.CreateInstance<T>();
}
public TDerivedEntity Create<TDerivedEntity>() where TDerivedEntity : class, T
{
return Activator.CreateInstance<TDerivedEntity>();
}
public List<T> Local
{
get { return _data; }
}
public override IEnumerable<T> AddRange(IEnumerable<T> entities)
{
_data.AddRange(entities);
return _data;
}
public override IEnumerable<T> RemoveRange(IEnumerable<T> entities)
{
for (int i = entities.Count() - 1; i >= 0; i--)
{
T entity = entities.ElementAt(i);
if (_data.Contains(entity))
{
Remove(entity);
}
}
return this;
}
Type IQueryable.ElementType
{
get { return _data.AsQueryable().ElementType; }
}
System.Linq.Expressions.Expression IQueryable.Expression
{
get { return _data.AsQueryable().Expression; }
}
IQueryProvider IQueryable.Provider
{
get { return _data.AsQueryable().Provider; }
}
IEnumerator IEnumerable.GetEnumerator()
{
return _data.GetEnumerator();
}
IEnumerator<T> IEnumerable<T>.GetEnumerator()
{
return _data.GetEnumerator();
}
}
The EF Core version had to include some extra interfaces to work.
public class FakeCoreDbSet<T> : Microsoft.EntityFrameworkCore.DbSet<T> , IQueryable, IEnumerable<T> where T : class
{
List<T> _data;
public FakeCoreDbSet()
{
_data = new List<T>();
}
public override T Find(params object[] keyValues)
{
throw new NotImplementedException("Derive from FakeDbSet<T> and override Find");
}
public override EntityEntry<T> Add(T item)
{
_data.Add(item);
//return item;
return null;
}
public override EntityEntry<T> Remove(T item)
{
_data.Remove(item);
//return item;
return null;
}
public override EntityEntry<T> Attach(T item)
{
return null;
}
public T Detach(T item)
{
_data.Remove(item);
return item;
}
public IList GetList()
{
return _data.ToList();
}
//public override T Create()
//{
// return Activator.CreateInstance<T>();
//}
public TDerivedEntity Create<TDerivedEntity>() where TDerivedEntity : class, T
{
return Activator.CreateInstance<TDerivedEntity>();
}
public List<T> Local
{
get { return _data; }
}
public override void AddRange(IEnumerable<T> entities)
{
_data.AddRange(entities);
//return _data;
}
public override void RemoveRange(IEnumerable<T> entities)
{
for (int i = entities.Count() - 1; i >= 0; i--)
{
T entity = entities.ElementAt(i);
if (_data.Contains(entity))
{
Remove(entity);
}
}
// this;
}
Type IQueryable.ElementType
{
get { return _data.AsQueryable().ElementType; }
}
System.Linq.Expressions.Expression IQueryable.Expression
{
get { return _data.AsQueryable().Expression; }
}
IQueryProvider IQueryable.Provider
{
get { return _data.AsQueryable().Provider; }
}
IEnumerator IEnumerable.GetEnumerator()
{
return _data.GetEnumerator();
}
IEnumerator<T> IEnumerable<T>.GetEnumerator()
{
return _data.GetEnumerator();
}
}
Once these were created I could use mock objects to get to them.
Note: _dbcontextSwitcher is the mock, created with Moq, of the class that returns the different database contexts.
var vgsdatabase = new Mock<IVGSContext>();
var settings=new FakeCoreDbSet<Setting>();
settings.Add(new Setting()
{
SettingID = "OrgPrivacy",
PK_Setting = Guid.NewGuid(),
UserID = "",
Value = "No"
});
vgsdatabase.Setup(s => s.Setting).Returns(settings);
_dbcontextSwitcher.Setup(s => s.GetDatabase(It.IsAny<string>())).Returns(vgsdatabase.Object);
| {
"pile_set_name": "StackExchange"
} |
Q:
Mule Oracle Database Connector SQL with IN OPERATOR
I have problems with the Mule database connector, which I am using for a select query. I have an ArrayList of strings to pass to the IN parameter.
Mule Sql Query - passing parameters to the IN operator
The solution mentioned above doesn't work with Mule ESB 3.7.3. I've tried many approaches and searched around, but apart from that document I haven't found a definitive way so far.
I am using the query below:
select * from db_table where id in (2,3,4)
In my example, the values 2,3,4 come from a flow variable that contains an ArrayList.
Any suggestions?
A:
I've solved the problem by using this answer again:
Mule Sql Query - passing parameters to the IN operator
In this Java block, I've built my query and put it into a flow variable called sql:
public Object onCall(MuleEventContext eventContext) throws Exception {
// TODO Auto-generated method stub
ArrayList<String> vib_list = eventContext.getMessage().getInvocationProperty("vibs");
String locale = eventContext.getMessage().getInvocationProperty("lang_locale").toString();
StringBuilder query = new StringBuilder();
String queryBase = "select distinct(matnr) from cated_prodrelease where matnr in(";
query.append(queryBase);
int numIndices = ((ArrayList<Integer>)eventContext.getMessage().getInvocationProperty("vibs")).size();
ArrayList<String> indices = new ArrayList<String>();
for(int i=0; i<numIndices; i++) {
indices.add("'"+ vib_list.get(i) + "'");
}
query.append(StringUtils.join(indices, ", "));
query.append(") " + "AND locale = '" + locale + "' " + "AND release_type = 'PI_RELEASE'");
String finalQuery = query.toString();
eventContext.getMessage().setInvocationProperty("sql", finalQuery);
return eventContext.getMessage();
}
Then I used this flow variable sql directly in the DB connector's dynamic-query parameter, and it worked perfectly.
<db:select config-ref="PDP_Configuration" doc:name="Database">
<db:dynamic-query><![CDATA[#[flowVars.sql]]]></db:dynamic-query>
</db:select>
This is not an official answer, but I believe the Mule developers should provide an official solution to this kind of major problem. I hope it helps!
| {
"pile_set_name": "StackExchange"
} |
Q:
Example of non metrizable topological groups satisfying an extra condition.
DEFINITION: A function $f:X \to Y$ is called D-supercontinuous if the inverse image of every open set is an open $F_{\sigma}$ set.
I am looking for an example of a non metrizable topological group (Hausdorff) in which the group operations are also D-supercontinuous.
Obviously every metrizable group is such a group, but I am having a hard time finding a nontrivial, non-metrizable example. I know it must not be second countable.
Any help to point me in the right direction is appreciated. Thanks.
A:
Consider $\mathbb{R}^\infty$ with the weak topology, i.e. the colimit of the sequence of inclusions $\mathbb{R}^0\to\mathbb{R}^1\to\mathbb{R}^2\to\dots$ (concretely, $\mathbb{R}^\infty$ is the set of finite-support sequences of real numbers, with the topology that a set is open iff its intersection with $\mathbb{R}^n$ is open for each $n$). This is a topological group with respect to coordinatewise addition (this is not obvious--to prove addition is continuous, you have to show $\mathbb{R}^\infty\times\mathbb{R}^\infty$ also has the weak topology; see for instance Theorem A.6 in Hatcher's Algebraic Topology). It is not first countable and thus not metrizable (again, this is not obvious--you can use an argument similar to the argument in this answer, picking sequences converging to $0$ along each coordinate axis). However, every open subset of $\mathbb{R}^\infty$ is $F_{\sigma}$: an open set is the union of its intersections with $\mathbb{R}^n$ for each $n$, and the intersection with $\mathbb{R}^n$ is an open subset of $\mathbb{R}^n$ and thus a countable union of closed subsets of $\mathbb{R}^n$, which are then also closed in $\mathbb{R}^\infty$.
| {
"pile_set_name": "StackExchange"
} |
Q:
How to manage rails asset does not have gem
We usually have a Ruby gem for famous JavaScript or CSS libs, such as bootstrap-sass or ember-rails. But for some JS libs, such as bootstrap-lightbox, there is sometimes no gem. In order to manage these assets automatically, I found the jail (https://github.com/charly/jail) gem, but it seems that project is not so active now. Is there any better solution than just downloading and pasting the files?
A:
Many of those "asset gems" are just a basic skeleton with JS/CSS assets, so it should not be too hard to build your own and publish it on RubyGems!
An advantage of this, besides locking versions in the Gemfile, is that you have control over them and don't risk screwing everything up during a bundle update.
I have found issues using external gems for managing assets, especially the Bootstrap ones: sometimes the precompilation breaks, or they upgrade the assets inside, breaking the entire site (or minuscule portions that you may hardly notice) with unwanted changes.
| {
"pile_set_name": "StackExchange"
} |
Q:
Where should I place code that must be run before a form is saved?
I have a VM which controls what is displayed, etc. In my case it's a User editing screen.
I have some code where, before the user is saved, I take the entered password, encrypt it with a salt, and stuff the result into an entity property.
So, where does this code belong? Right now I'm intercepting OnBeforeSave and putting it there. Should I move it to the model?
EDIT:
More info on my environment/layers:
Silverlight
Views (XAML)
ViewModel (encapsulates data access via repositories). Exposes properties and data objects to bind
Model - the DevForce framework is responsible for persistence. I also use its validation functionality. I can extend it with partial classes, etc.
A:
That's a perfect example of business logic: it doesn't belong in the model, it belongs in whatever takes the model and passes it to the DAL, or repository, for instance.
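For example, a rough C# sketch of that idea (all of these type names are hypothetical, not DevForce or Silverlight APIs):
public class User
{
    public string Salt { get; set; }
    public string PasswordHash { get; set; }
}

public interface IUserRepository
{
    void Save(User user);
}

public interface IPasswordHasher
{
    string Hash(string plainText, string salt);
}

public class UserSaveService
{
    private readonly IPasswordHasher _hasher;
    private readonly IUserRepository _repository;

    public UserSaveService(IPasswordHasher hasher, IUserRepository repository)
    {
        _hasher = hasher;
        _repository = repository;
    }

    public void Save(User user, string enteredPassword)
    {
        // The "encrypt before persisting" rule lives here, not in the view model's OnBeforeSave hook
        user.PasswordHash = _hasher.Hash(enteredPassword, user.Salt);
        _repository.Save(user);
    }
}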
If you can post some more info about the layers, or IoC-style services your app uses, we can probably be more specific.
(MVVM is about models, view models and views.. this isn't really much to do with MVVM in fact!)
Hope that helps.
(Edit: ohai it's you again, saw another question earlier today :) I'm not stalking you..)
| {
"pile_set_name": "StackExchange"
} |
Q:
Line between point and cursor
I have created a map in which the user can click-drag to make a free-form polyline as part of a polygon. However, I can't get it to show the line extending from the point I just made to the cursor. I want to implement this functionality.
I am currently using click, mousemove, etc. event listeners for the free-form polyline, and these are disabled under the drawing library.
How is it exactly that Maps Engine Lite draws a line from the point you just clicked to the cursor when drawing a polygon or polyline?
I have already looked through the DrawingManager and DrawingOptions and can't figure out how it shows a line from point to the cursor programmatically.
I'm guessing I need to find the coordinates of my cursor on mousemove, and draw a line between that location and the last point I clicked. Is this correct?
A:
try it out:
//observe click
google.maps.event.addListener(map,'click',function(e){
//if there is no Polyline-instance, create a new Polyline
//with a path set to the clicked latLng
if(!line){
line=new google.maps.Polyline({map:map,path:[e.latLng],clickable:false});
}
//always push the clicked latLng to the path
//this point will be used temporarily for the mousemove-event
line.getPath().push(e.latLng);
new google.maps.Marker({map:map,position:e.latLng,
draggable:true,
icon:{url:'http://maps.gstatic.com/mapfiles/markers2/dd-via.png',
anchor:new google.maps.Point(5,5)}})
});
//observe mousemove
google.maps.event.addListener(map,'mousemove',function(e){
if(line){
//set the last point of the path to the mousemove-latLng
line.getPath().setAt(line.getPath().getLength()-1,e.latLng)
}
});
Demo: http://jsfiddle.net/doktormolle/4yPDg/
Note: this part of your code is redundant:
var coord = new google.maps.LatLng(option.latLng.lb, option.latLng.mb);
option.latLng is already a google.maps.LatLng, you may use it directly
var coord = option.latLng;
Furthermore: you should never use these undocumented properties like mb or lb; the names of these properties are not fixed and may change in the next session.
| {
"pile_set_name": "StackExchange"
} |
Q:
Snake animation
I got a homework assignment for my coding class and I'm not sure how to do it. I am supposed to make an animation that lights up the squares in a sort of snake-type pattern (it starts in the top-left square, continues to the top-right, then goes to the second line, but instead of starting on the left side it starts on the right side). How should I do it? Please help me, I'm clueless. Below is the code I've got so far.
import tkinter as tk
import random
master = tk.Tk()
rectangle_list = []
canvas_width = 280
canvas_height = 250
w = tk.Canvas(master, width=canvas_width, height=canvas_height)
w.pack()
input1 = tk.Entry (master)
w.create_window(100, 40, window=input1)
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 0, 25, 25, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 0, x, 25, fill="grey"))
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 25, 25, 50, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 25, x, 50, fill="grey"))
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 50, 25, 75, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 50, x, 75, fill="grey"))
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 75, 25, 100, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 75, x, 100, fill="grey"))
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 100, 25, 125, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 100, x, 125, fill="grey"))
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 125, 25, 150, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 125, x, 150, fill="grey"))
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 150, 25, 175, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 150, x, 175, fill="grey"))
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 175, 25, 200, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 175, x, 200, fill="grey"))
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 200, 25, 225, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 200, x, 225, fill="grey"))
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 225, 25, 250, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 225, x, 250, fill="grey"))
# manage color change loop based on index of rectangle list
def uno(ndex=0):
    if ndex < len(rectangle_list):
        w.itemconfig(rectangle_list[ndex], fill='red')
        master.after(100, uno, ndex+1)

def dos(ndex=0):
    if ndex < len(rectangle_list):
        w.itemconfig(rectangle_list[ndex], fill='grey')
        master.after(0, dos, ndex+1)

def tres(ndex=0):
    if ndex < len(rectangle_list):
        w.itemconfig(rectangle_list[ndex], fill='red')
        master.after(100, tres, ndex-1)

def quatros(ndex=0):
    if ndex < len(rectangle_list):
        w.itemconfig(rectangle_list[ndex], fill='red')
        master.after(100, tres, ndex-6)
    if ndex < len(rectangle_list):
        w.itemconfig(rectangle_list[ndex], fill='red')
        master.after(100, uno, ndex+6)

def cinq(ndex=0):
    if ndex < len(rectangle_list):
        w.itemconfig(rectangle_list[ndex], fill='red')
        master.after(100, tres, ndex-1)
    if ndex < len(rectangle_list):
        w.itemconfig(rectangle_list[ndex], fill='red')
        master.after(100, uno, ndex+1)

def six(ndex=0):
    while True:
        if ndex == len(rectangle_list):
            w.itemconfig(rectangle_list[ndex], fill='red')
            master.after(10, six, ndex+random.choice(rectangle_list))

def seven(ndex=0):
    if ndex < len(rectangle_list):
        w.itemconfig(rectangle_list[ndex], fill='red')
        master.after(100, seven, ndex+3)
tk.Button(master, text="animacia", command=uno).pack(side='left', padx=10)
tk.Button(master, text="zhasni", command=dos).pack(side='left', padx=10)
tk.Button(master, text="animacia 2", command=tres).pack(side='left', padx=10)
tk.Button(master, text="animacia 3", command=quatros).pack(side='left', padx=10)
tk.Button(master, text="animacia 4", command=cinq).pack(side='left', padx=10)
tk.Button(master, text="animacia 5", command=six).pack(side='left', padx=10)
tk.Button(master, text="animacia 6", command=seven).pack(side='left', padx=10)
master.mainloop()
A:
The easiest way is to alternate your range calls between range(11) and range(10,0,-1):
for i in range(11):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 0, 25, 25, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 0, x, 25, fill="grey"))
for i in range(10,0,-1):
    x = i * 25
    if i == 1:
        rectangle_list.append(w.create_rectangle(0, 25, 25, 50, fill="grey"))
    else:
        rectangle_list.append(w.create_rectangle(x-25, 25, x, 50, fill="grey"))
...
But you should really consider minimizing repetitive code in your multiple for loops.
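For example, here is a minimal sketch of what that could look like (it assumes the w canvas and rectangle_list from your code, plus a 10-row by 11-column grid of 25-pixel cells): build the whole grid with one pair of nested loops and reverse every other row, so the rectangles end up in the list in snake order.
cell = 25
rows, cols = 10, 11
# Append the rectangles in snake (boustrophedon) order:
# even rows go left-to-right, odd rows go right-to-left.
for row in range(rows):
    col_order = range(cols) if row % 2 == 0 else range(cols - 1, -1, -1)
    for col in col_order:
        x, y = col * cell, row * cell
        rectangle_list.append(
            w.create_rectangle(x, y, x + cell, y + cell, fill="grey"))
Because the list is already in snake order, a plain sequential animation like your uno (which lights rectangle_list[ndex] and then schedules ndex+1) sweeps left-to-right on one row and right-to-left on the next with no further changes.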
| {
"pile_set_name": "StackExchange"
} |
Q:
Will scribe-java work fine in appengine sandbox
I am developing an app on AppEngine and experimenting with OAuth, where I came across scribe-java [1], which seems easy and good, and I am planning to use it.
Will it play in AppEngine (i.e. with the given JRE whitelist)? Is anyone using it with success?
I didn't see it listed in [2], and googling didn't give me direct answers either.
Thanks for your reply.
[1] https://github.com/fernandezpablo85/scribe-java
[2] http://code.google.com/p/googleappengine/wiki/WillItPlayInJava
A:
I have used scribe with Google AppEngine successfully many times.
Scribe uses java.net.HttpURLConnection which Google AppEngine supports, so there's no problem there.
Thanks for noticing that it's not listed on the GAE page; I will try to see if I can get the Google guys to include it :)
| {
"pile_set_name": "StackExchange"
} |