[
"stackoverflow",
"0036876813.txt"
] | Q:
How to remove a regex match from the source in Java
I have a string as follows:
Acid Exposure (pH) Total
Total Normal
Clearance pH : Channel 7
Number of Acid Episodes 6
Time 48.6 min
Percent Time 20.3%
Mean Acid Clearance Time 486 sec
Longest Episode 24.9 min
Gastric pH : Channel 8
Time pH<4.0 208.1 min
Percent Time 86.7%
Postprandial Data (Impedance) Total
Total Normal
Acid Time 2.9 min
Acid Percent Time 1.2%
Nonacid Time 11.6 min
Nonacid Percent Time 4.8%
All Reflux Time 14.5 min
All Reflux Percent Time 6.1%
Median Bolus Clearance Time 8 sec
Longest Episode 11.2 min
NOTE: Reflux episodes are detected by Impedance and categorized as acid or nonacid by pH
I want to remove everything from Postprandial Data (Impedance) Total to
NOTE: Reflux episodes are detected by Impedance and categorized as acid or nonacid by pH
My code is
Pattern goPP = Pattern.compile("Postprandial Data.*?Reflux episodes are detected by Impedance and categorized as acid or nonacid by pH",Pattern.DOTALL);
Matcher goPP_pattern = goPP.matcher(s);
while (goPP_pattern.find()) {
for (String df:goPP_pattern.group(0).split("\n")) {
s.replaceAll(df,"");
}
}
However, the string s is the same after this as before. How can I remove the match from the source string? If that's not possible, how can I create a new string with everything except the match?
A:
Strings are immutable in Java; reassign the result of the call:
s.replaceAll(df,""); // wrong, no op
s = s.replaceAll(df,"");//correct
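Alternatively (a sketch, not part of the original answer), you can skip the per-line loop entirely and remove the whole match in one call, since replaceAll already operates on the full string:
import java.util.regex.Pattern;

static String stripPostprandial(String s) {
    // Same pattern as in the question; DOTALL lets .*? span multiple lines
    Pattern goPP = Pattern.compile(
        "Postprandial Data.*?Reflux episodes are detected by Impedance and categorized as acid or nonacid by pH",
        Pattern.DOTALL);
    // replaceAll returns a new string with every match removed;
    // the input string itself is never modified
    return goPP.matcher(s).replaceAll("");
}
This also avoids a subtle bug in the loop: replaceAll treats each line df as a regex, so lines containing characters like ( or % would not match literally.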
|
[
"stackoverflow",
"0037584400.txt"
] | Q:
Google Places Autocomplete - Click first result on blur
I'm using Google Places Autocomplete, and I simply want it to fire a click event on the top item in the results list when a blur event fires on the form field and suggestions exist.
var pac_input = document.getElementById('pick-auto');
(function pacSelectFirst(input) {
// store the original event binding function
var _addEventListener = (input.addEventListener) ? input.addEventListener : input.attachEvent;
function addEventListenerWrapper(type, listener) {
// Simulate a 'down arrow' keypress on hitting 'return' when no pac suggestion is selected,
// and then trigger the original listener.
if (type == "keydown" || type == "blur") {
var orig_listener = listener;
listener = function(event) {
var suggestion_selected = $(".pac-item-selected").length > 0;
var keyCode = event.keyCode || event.which;
if ((keyCode === 13 || keyCode === 9) && !suggestion_selected) {
var simulated_downarrow = $.Event("keydown", {
keyCode: 40,
which: 40
});
orig_listener.apply(input, [simulated_downarrow]);
} else if(event.type === 'blur') {
pac_input.value =
$(".pac-container .pac-item:first-child").text();
// $(".pac-container").delegate(".pac-item:first-child","click",function(){
// console.log("success");
// });
$(".pac-container .pac-item:first-child").bind('click',function(){
console.log("click");
});
}
orig_listener.apply(input, [event]);
};
}
// add the modified listener
_addEventListener.apply(input, [type, listener]);
}
if (input.addEventListener)
input.addEventListener = addEventListenerWrapper;
else if (input.attachEvent)
input.attachEvent = addEventListenerWrapper;
})(pac_input);
$(function() {
var autocomplete = new google.maps.places.Autocomplete(pac_input);
});
A:
Try this:
Demo: http://jsfiddle.net/q7L8bawe/
Add this event:
$("#pick-auto").blur(function (e) {
if (e.which == 13) {
var firstResult = $(".pac-container .pac-item:first").text();
var geocoder = new google.maps.Geocoder();
geocoder.geocode({"address":firstResult }, function(results, status) {
if (status == google.maps.GeocoderStatus.OK) {
var lat = results[0].geometry.location.lat(),
lng = results[0].geometry.location.lng(),
placeName = results[0].address_components[0].long_name,
latlng = new google.maps.LatLng(lat, lng);
$(".pac-container .pac-item:first").addClass("pac-selected");
$(".pac-container").css("display","none");
$("#pick-auto").val(firstResult);
$(".pac-container").css("visibility","hidden");
}
});
} else {
$(".pac-container").css("visibility","visible");
}
});
|
[
"stackoverflow",
"0016614447.txt"
] | Q:
Running one test on a remote test controller/agent
I have unit tests set up to run on a build server. I just added a coded UI test, which isn't running because I need to set the controller to run in interactive mode. Because we couldn't alter the existing build controller, we set up a machine with its own controller/agent combo.
How can I, within Visual Studio, tell one of the tests (coded UI) to run under this controller/agent, while keeping the others as they are? I looked into testsettings files, but it's not clear how I can get this done.
The controllers/agents are 2010; I'm on VS 2012.
A:
First, you have to configure your controller to run with Visual Studio. So, open the Test Controller Configuration tool and check that the Register with Team Project Collection option is not selected.
Then, from visual studio (2012):
Right-click on the solution and select Add New Item. Add a new Test Settings file.
In the Test Settings window, go to the Roles tab. Select Remote Execution and add the controller's (machine) name or IP in the Controller field.
After you have saved your settings, select Test --> Test Settings --> Select Test Settings File and choose your new settings.
|
[
"stackoverflow",
"0060158984.txt"
] | Q:
Stratified random sample to match a different table in BigQuery
This should be a simple extension of this question, but my result is not correct and I can't figure it out. I'd like the proportions in the table I'm drawing from to match the proportions of another table. I'd also like to have it stratified by two categories. I think it should be something like:
WITH table AS (
SELECT *
FROM `another_table` a
), table_stats AS (
SELECT *, SUM(c) OVER() total
FROM (
SELECT cat1, cat2, COUNT(*) c
FROM table
GROUP BY cat1, cat2
HAVING c>1000000)
)
SELECT COUNT(*) samples, cat1, cat2, ROUND(100*COUNT(*)/MAX(c),2) percentage
FROM (
SELECT id, cat1, cat2, c
FROM table `fh-bigquery.reddit_comments.2018_09`
JOIN table_stats b
USING(cat1, cat2)
WHERE RAND()< 1000/total
)
GROUP BY 2, 3
This should give about 1000 rows, but the result is much higher, and the percentage calculation is off.
A:
I think your rand() comparison is off:
WITH table AS (
SELECT a.*
FROM `another_table` a
),
table_stats AS (
SELECT cc.*, SUM(c) OVER () as total
FROM (SELECT cat1, cat2, COUNT(*) as c
FROM table
GROUP BY cat1, cat2
HAVING c > 1000000
) cc
)
SELECT COUNT(*) as num_samples, cat1, cat2, ROUND(100*COUNT(*)/MAX(c),2) percentage
FROM (SELECT id, cat1, cat2, c
FROM (select t.*, COUNT(*) OVER () as t_total,
COUNT(*) OVER (PARTITION BY cat1, cat2) as tcc_total
from table `fh-bigquery.reddit_comments.2018_09` t
) t JOIN
table_stats b
USING (cat1, cat2)
WHERE RAND() < (1000.0 / t.t_total) * (c / total) / (tcc_total / t_total)
) t
GROUP BY 2, 3;
Note that you need the total size of the second table to get the sample size (approximately) correct.
This is also random. If you really want a stratified sample, then you should do an nth sample on an ordered set. If that is of interest to you, then ask a new question, with appropriate sample data, desired results, and an explanation.
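That said, as a rough illustration of the nth-sample idea (a sketch, untested against real data; column names follow the question), BigQuery can rank rows per stratum and keep a fixed number from each:
SELECT id, cat1, cat2
FROM (
  SELECT id, cat1, cat2,
         ROW_NUMBER() OVER (PARTITION BY cat1, cat2 ORDER BY RAND()) AS rn
  FROM `another_table`
)
WHERE rn <= 100  -- exactly 100 rows per (cat1, cat2) stratum, when available
Unlike the RAND() filter, this gives exact per-stratum counts, at the cost of a sort within each stratum.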
|
[
"stackoverflow",
"0028563378.txt"
] | Q:
Pivoting Data in SQL Server
I have a table in sql server created using this code
CREATE TABLE t3
([NAME] varchar(20), [VALUE] varchar(17));
INSERT INTO t3
([NAME], [VALUE])
VALUES
('Name_Screened', 'johny bravo'),
('Name_Screened', 'JOHNY CHAVO'),
('Match_Type', 'Direct'),
('Match_Type', 'Direct'),
('Disposition', 'Successful'),
('Disposition', 'Successful'),
('Compliance_Approval', 'Yes'),
('Compliance_Approval', 'Yes'),
('Supporting_Documents', 'Lexix Nexis Match'),
('Supporting_Documents', 'WORD NET MATCH');
I am trying to pivot this data using the code
SELECT PVT.Match_Type, PVT.Name_Screened,PVT.Disposition,PVT.Compliance_Approval,PVT.Supporting_Documents
FROM T3
PIVOT (
max(VALUE)
FOR NAME IN (Match_Type,Name_Screened,Disposition,Compliance_Approval,Supporting_Documents)
) PVT
but I am only getting one row, like this:
Match_Type - Name_Screened - Disposition - Compliance_Approval - Supporting_Documents
Direct - JOHNY CHAVO - Successful - Yes - WORD NET MATCH
I want two rows from the 10 data rows but get only one.
I think I am only missing the right aggregation function in the pivot. Please help.
A:
You need to slightly modify your query:
SELECT PVT.Match_Type, PVT.Name_Screened, PVT.Disposition,
PVT.Compliance_Approval,PVT.Supporting_Documents
FROM (
SELECT NAME, VALUE,
ROW_NUMBER() OVER (PARTITION BY NAME ORDER BY VALUE) AS rn
FROM T3 ) AS src
PIVOT (
MAX(VALUE)
FOR NAME IN (Match_Type, Name_Screened, Disposition,
Compliance_Approval,Supporting_Documents)
) pvt
This will produce a separate row for each rn value of src.
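With the sample data above, and assuming a case-insensitive collation (the SQL Server default, so 'johny bravo' sorts before 'JOHNY CHAVO'), the output would look like:
Match_Type - Name_Screened - Disposition - Compliance_Approval - Supporting_Documents
Direct - johny bravo - Successful - Yes - Lexix Nexis Match
Direct - JOHNY CHAVO - Successful - Yes - WORD NET MATCH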
|
[
"stackoverflow",
"0038449187.txt"
] | Q:
Scala Anorm select 2 values using column resolution
According to the documentation (https://www.playframework.com/documentation/2.5.x/Anorm), I can do the following to retrieve values from two columns:
val res: (String, Int) = SQL"SELECT text, count AS i".map(row =>
row[String]("text") -> row[Int]("i")
)
This does not compile...
Causes this:
Expression of type SimpleSql[(String, Int)] doesn't conform to expected type (String, Int)
I'm just looking for a single method of doing this (for anorm 2.5+). I was using regular parsers but am looking for this more concise way of doing it.
A:
The code is not complete: to get a single result as such a tuple, the .single combinator must be used.
val res: (String, Int) = SQL"SELECT text, count AS i".map(row =>
row[String]("text") -> row[Int]("i")
).single
Using Anorm flatteners is easier for tuple results: see the examples.
|
[
"ell.stackexchange",
"0000003067.txt"
] | Q:
Can an adjective phrase include a conjunction?
Probably, if I had lately left a good home and kind parents, this would have been the hour when I should most keenly have regretted the separation; that wind would then have saddened my heart; this obscure chaos would have disturbed my peace: as it was, I derived from both a strange excitement, and reckless and feverish, I wished the wind to howl more wildly, the gloom to deepen to darkness, and the confusion to rise to clamour. (Jane Eyre)
I guess ‘and’ is unnecessary or awkward here, for ‘reckless and feverish’ forms an adjective phrase by itself. If this is right, is ‘and’ used just for literary rhythm, or can an adjective phrase include a conjunction?
A:
Reckless and feverish pre-modifies I. If you imagine a comma after and, the meaning should become clear.
|
[
"stackoverflow",
"0032009085.txt"
] | Q:
Compare tuples of different sizes
Why isn't it possible to compare two tuples of different sizes like this:
#include <tuple>
int main() {
std::tuple<int, int> t1(1, 2);
std::tuple<int> t2(1);
if(std::tuple_size<decltype(t1)>::value == std::tuple_size<decltype(t2)>::value)
return (t1 == t2);
else
return 0;
}
I know that t1==t2 is not possible, but in this example it wouldn't be executed. Is there a way to compare tuples of different sizes?
A:
operator== requires the tuples to be of equal lengths.
§ 20.4.2.7 [tuple.rel]:
template<class... TTypes, class... UTypes>
constexpr bool operator==(const tuple<TTypes...>& t, const tuple<UTypes...>& u);
1 Requires: For all i, where 0 <= i and i < sizeof...(TTypes), get<i>(t) == get<i>(u) is a valid expression returning a type that is convertible to bool. sizeof...(TTypes) == sizeof...(UTypes).
If you want two tuples of different lengths to be considered unequal, you'd need to implement this logic yourself:
template <typename... Ts, typename... Us>
auto compare(const std::tuple<Ts...>& t1, const std::tuple<Us...>& t2)
-> typename std::enable_if<sizeof...(Ts) == sizeof...(Us), bool>::type
{
return t1 == t2;
}
template <typename... Ts, typename... Us>
auto compare(const std::tuple<Ts...>& t1, const std::tuple<Us...>& t2)
-> typename std::enable_if<sizeof...(Ts) != sizeof...(Us), bool>::type
{
return false;
}
DEMO
This way, the code comparing two tuples, t1 == t2, is instantiated only when the tuple lengths match. In your scenario, the compiler is unable to compile your code, since there is no predefined operator== for that case.
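For completeness, a minimal usage sketch of those compare overloads (assuming they are declared above main as in the answer):
#include <iostream>
#include <tuple>

int main() {
    std::tuple<int, int> t1(1, 2);
    std::tuple<int> t2(1);
    std::tuple<int, int> t3(1, 2);

    std::cout << std::boolalpha
              << compare(t1, t2) << '\n'   // false: different lengths, second overload
              << compare(t1, t3) << '\n';  // true: same length, element-wise ==
}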
|
[
"stackoverflow",
"0049189041.txt"
] | Q:
MySQL: Enum vs Varchar-with-Index
Suppose, I have a requirement to create a table where one of the column will be having a value from this limited and never-changing set:
'all', 'local', 'qa', 'staging', and 'production'.
Using the ENUM data type for this situation looks like a suitable solution, but after reading this article and some other threads on the internet, I feel discouraged from using it. So, if I do not want to create a lookup table, and keeping the combination of env and name unique is also a requirement, what is my best option between a column of ENUM type and a column of VARCHAR type with an index created on it?
Also considering that the insertion in this table will be rare and we want this particular query to execute faster:
SELECT `enabled` FROM `features`
WHERE `name` = 'some_featuere'
AND `env` IN('all', 'qa')
ORDER BY `enabled` ASC limit 1;
Which one of these is a better design and why?
CREATE TABLE `features` (
`id` INTEGER NOT NULL AUTO_INCREMENT,
`name` VARCHAR (50) NOT NULL,
`env` ENUM('all', 'local', 'qa', 'staging', 'production') NOT NULL,
`enabled` TINYINT(1) DEFAULT 0,
`created_at` DATETIME,
`updated_at` DATETIME,
PRIMARY KEY (`id`),
UNIQUE KEY `idx_unq_features_name_env` (`name`,`env`)
);
OR
CREATE TABLE `features` (
`id` INTEGER NOT NULL AUTO_INCREMENT,
`name` VARCHAR (50) NOT NULL,
`env` VARCHAR(10) NOT NULL,
`enabled` TINYINT(1) DEFAULT 0,
`created_at` DATETIME,
`updated_at` DATETIME,
PRIMARY KEY (`id`),
INDEX `idx_features_env` (`env`),
UNIQUE KEY `idx_unq_features_name_env` (`name`,`env`)
);
A:
The short answer to your question is "neither", because your query would use an index on name/env in both scenarios. However, if I had to settle on one, I'd go for VARCHAR over ENUM as the lesser of two evils, but I think there might be some other issues with your approach.
First, the VARCHAR option is only going to duplicate an issue with ENUM mentioned in that article (i.e. the addition of attributes or related data) while losing possibly the only advantage you might gain from ENUM, that of data integrity. You'd get data integrity via a lookup without the evilness of the ENUM.
Second, you might be focusing on a performance issue that doesn't exist with the query. How often is it run? How slow is it? As it stands, you have an index on NAME/ENV, and the only way I can think of to speed up the query would be a covering index that includes ENABLED; but I doubt the query is a performance killer as it is, and I suspect you'd see very little difference joining to a lookup table.
Third, 'ALL' as an option makes very little sense unless a feature should only ever be deployed in one environment at a time or in ALL of them simultaneously. If that doesn't hold true, you would necessarily have to delete all other records related to the feature name whenever you wanted to apply the 'ALL' option. 'ALL' would also prevent selectively enabling/disabling features in different environments or separately recording create/update events. That's introducing data management issues that don't need to exist.
Fourth, the columns ID, NAME, CREATED_AT and UPDATED_AT are all attributes that appear to relate directly to a Feature, while the columns ENV and ENABLED relate to where and how that Feature is deployed. At first glance, that suggests storing this data in a completely separate table (possibly with CREATED_AT and UPDATED_AT to indicate when they were first deployed and last updated). I'd personally go with Feature, Environment and Feature_Environment as separate tables, with foreign keys from Feature_Environment to the other two.
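For illustration, a rough sketch of that three-table layout (table and column names are mine, not from the question):
CREATE TABLE feature (
  id INTEGER NOT NULL AUTO_INCREMENT,
  name VARCHAR(50) NOT NULL,
  created_at DATETIME,
  updated_at DATETIME,
  PRIMARY KEY (id),
  UNIQUE KEY idx_unq_feature_name (name)
);

CREATE TABLE environment (
  id INTEGER NOT NULL AUTO_INCREMENT,
  name VARCHAR(10) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY idx_unq_environment_name (name)
);

CREATE TABLE feature_environment (
  feature_id INTEGER NOT NULL,
  environment_id INTEGER NOT NULL,
  enabled TINYINT(1) DEFAULT 0,
  created_at DATETIME,  -- when first deployed
  updated_at DATETIME,  -- when last updated
  PRIMARY KEY (feature_id, environment_id),
  FOREIGN KEY (feature_id) REFERENCES feature (id),
  FOREIGN KEY (environment_id) REFERENCES environment (id)
);
The composite primary key replaces the unique (name, env) constraint, and data integrity for environments comes from the lookup table instead of an ENUM.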
|
[
"stackoverflow",
"0047668387.txt"
] | Q:
Regex to get all the values in the html select tag
I'm trying to fetch all the values associated with a specific dropdown box.
Example:
<select id='countries'>
<option value='0'>All Categories</option>
<option value='1'>USA</option>
<option value='2'>China</option>
<option selected='selected' value='3'>India</option>
<option value='4'>Japan</option>
</select>
<select id='Gender'>
<option value='0'>All Categories</option>
<option selected='selected' value='1'>Male</option>
<option value='2'>Female</option>
</select>
<select id='Body_ddlSite'>
<option value='1'>Select-</option>
<option value='2'>ECOSPACE</option>
<option selected='selected' value='3'>MILLENNIUM TOWERS</option>
<option value='4'>ABMIT-MT</option>
</select>
Note: consider above html as plain string
Result Should Be: For id='Gender'
0
1
2
OR
All Categories
Male
Female
For now, I have tried to get all the <option> elements for a specific <select>...</select> block by using this regex:
(?<=id='Gender'>)((.|\n)*?)(?=</select>)
Result of above regex:
<option value='0'>All Categories</option>
<option selected='selected' value='1'>Male</option>
<option value='2'>Female</option>
But now I want to fetch all the values associated with it.
A:
This works for me:
(?:\G(?!\A)|\bid='Gender'>)\s*<option\s[^<]*?value='(?<val>\d+)'>(?<txt>[^<]*)</option>
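The question doesn't name a language; assuming a regex engine that supports \G and named groups (Java 7+ or .NET, for example), a Java usage sketch would be:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OptionValues {
    public static void main(String[] args) {
        String html = "<select id='Gender'>"
                + "<option value='0'>All Categories</option>"
                + "<option selected='selected' value='1'>Male</option>"
                + "<option value='2'>Female</option>"
                + "</select>";
        // \G chains each match to the end of the previous one, so only
        // options inside the id='Gender' select are captured
        Pattern p = Pattern.compile(
            "(?:\\G(?!\\A)|\\bid='Gender'>)\\s*<option\\s[^<]*?value='(?<val>\\d+)'>(?<txt>[^<]*)</option>");
        Matcher m = p.matcher(html);
        while (m.find()) {
            System.out.println(m.group("val") + " -> " + m.group("txt"));
        }
    }
}
This prints 0 -> All Categories, 1 -> Male, and 2 -> Female.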
|
[
"stackoverflow",
"0039126106.txt"
] | Q:
Google People API in PHP
I'm trying to implement the People API. After a successful OAuth2 flow, when I try to load people, the error is:
Undefined property: Google_Service_People_Resource_People::$connections
This is lines who produce error:
$people_service = new Google_Service_People($client);
$connections = $people_service->people->connections->listConnections('people/me');
I am going by this tutorial: https://developers.google.com/people/v1/getting-started,
and this: https://developers.google.com/people/v1/requests.
Thanks
A:
I think you are looking for...
$connections = $people_service->people_connections->listPeopleConnections('people/me');
|
[
"bicycles.stackexchange",
"0000059856.txt"
] | Q:
Independent drivetrains on tandem bicycle
On a standard tandem frame, is it possible to build a tandem bicycle with independent drivetrains for the two riders? Is there a custom frame builder that would build a tandem like this?
I'm aware of the half-recumbent tandem that has independent drivetrains, but I'm specifically interested in the classic/normal tandem frame.
EDIT: By independent drivetrain, I mean that I would like each rider to be able to coast independently of the other. The cadence can remain constant between front and back when both are pedaling.
A:
As Chris H points out in his comment, this could mean two different things.
The half-recumbent/half-upright design does have separate gearing for each rider.
There's a different system that some conventional recumbents use that gives each rider the ability to coast independently, but they pedal at the same cadence: both sets of cranks drive a jackshaft (located just in front of the stoker's crank), which in turn drives the rear wheel.
There have been numerous ideas for bike drivetrains over the past century, and it's easy to imagine that other ideas have been tried out and lost to history.
A:
Here's a picture of a half-recumbent tandem, from https://www.ucycle.com/merchant/2856/images/zoom/hase-pino-allround.jpg
Here is a link to a video of people riding a recumbent tandem.
https://binged.it/2TXkxJC
The video does a good job of explaining that the front crank has a freewheel mechanism that lets the riders pedal at different speeds. The front rider can also stop pedaling.
If this is what is meant by "independent drivetrains", then the functionality is in the crank rather than the frame itself.
The key to having a regular tandem work like the recumbent tandem in the video is to find someone who makes a crank with a freewheeling chainring, like the old-school Schwinn Suburbans from the late 70s with Shimano Positron FFS (Front Freewheel System).
Here's a link to a product for mountain bikes with a freewheeling chain ring
https://dirtmountainbike.com/news/hxr-easy-shift-crankset-allows-change-gear-without-pedalling.html
With this part - or something like it - any tandem frame builder should be able to get you going.
A:
I know of two solutions to your question. I'll post two brands; however, they are not meant as an advertisement, but rather as a starting point for further searching.
One is used by Onderwater Fiets from Amsterdam. Their tandems (and also multi-person bikes) combine the stoker and pilot functions and are meant for families. There, the drive is permanent between the last bottom bracket and the rear wheel, and the other bottom brackets are freewheeled so the passengers in front can pedal individually.
The other is R&B Fly; not much can be found about those on the Internet. Funny thing, I own one, so the independent drive can be further inspected. There, each chainring (single for the pilot; double, same size, for the stoker) attached to the crankset is freewheeled, thus pedalling can be done independently. Anyway, this bike awaits some renovation; I haven't ridden it yet.
Thus, it is possible to build a tandem with independent drivetrains, and with enough invention you can even build one where the cadences don't need to match (add a front derailleur and triple chainring to the pilot's position for a higher / same / lower cadence than the stoker's).
|
[
"stackoverflow",
"0012600483.txt"
] | Q:
Create application to connect with computer
I want to create an application to turn off my computer. I'm thinking of connecting the phone to the computer over WiFi, but I don't know how to send a command to the computer, or how the computer should interpret this command and turn itself off. Can you show me where to start and what I should know to create an application like this?
A:
You can achieve that with the client/server paradigm. You'll need to develop an Android app that will work as the client and a PC app that will work as the server.
On the server side you need to wait for two commands:
Discover: Used for the client to know what servers are available
Action: Used to let the server know that it must shut itself down
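A minimal sketch of such a server in Java (the port, the command names, and the Windows shutdown command are all illustrative choices, not a fixed protocol):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ShutdownServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5555)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String command = in.readLine();
                    if ("DISCOVER".equals(command)) {
                        out.println("SHUTDOWN-SERVER");  // let the phone find us
                    } else if ("ACTION".equals(command)) {
                        // Windows shutdown; use "shutdown -h now" on Linux
                        Runtime.getRuntime().exec("shutdown /s /t 0");
                        return;
                    }
                }
            }
        }
    }
}
The Android client would then open a Socket to the PC's IP on the same port and write one of those commands.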
|
[
"stackoverflow",
"0050282688.txt"
] | Q:
onClickEvent in PlaybackSupportFragment's Presenter
Hi, I'm building a PlaybackSupportFragment, and after adding an extra row below the player controls, the Presenter's click event does not work; it is not caught at all.
ArrayObjectAdapter episodeRow = new ArrayObjectAdapter(new EpisodePresenter(mother));
for(Episode episode : episodes)
episodeRow.add(episode);
superAdapter.add(new ListRow(new HeaderItem(0, "Episodes"), episodeRow));
setAdapter(superAdapter);
and EpisodePresenter itself is (the class of course extends android.support.v17.leanback.widget.Presenter):
@Override
public ViewHolder onCreateViewHolder(ViewGroup parent) {
TextView view = new TextView(parent.getContext());
view.setLayoutParams(new ViewGroup.LayoutParams(315, 175));
view.setFocusable(true);
view.setFocusableInTouchMode(true);
view.setBackgroundColor(context.getResources().getColor(R.color.default_background));
view.setTextColor(Color.WHITE);
view.setGravity(Gravity.CENTER);
return new ViewHolder(view);
}
@Override
public void onBindViewHolder(ViewHolder viewHolder, Object item) {
TextView vHolder = (TextView) viewHolder.view;
final Episode model = (Episode) item;
vHolder.setText(model.name);
vHolder.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// Does not work at all
Toast.makeText(context, "dsadas", Toast.LENGTH_SHORT).show();
}
});
}
@Override
public void onUnbindViewHolder(ViewHolder viewHolder) {
}
What is a solution to that?
A:
Fixed by using setOnEditorActionListener instead of onClickListener
|
[
"stackoverflow",
"0028755159.txt"
] | Q:
Finding all the indexes of some elements on a given list. Can it be done in less than O(n^2) without arrays in Haskell?
Given 2 lists of unique, orderable, non-contiguous elements, say:
['d', 'a', 'z', 'b']
I want to find their index in another list, say:
['a', 'b', 'z', 'd']
The result would be a list with their positions:
[3, 0, 2, 1] -- element at 0 is at 3,
-- element at 1 is at 0, etc.
A:
One simple solution is to create a Data.Map or hash table using the second list so you can have O(log n) index lookups instead of O(n) ones.
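A minimal sketch of that approach (the function name is mine):
import qualified Data.Map as Map

-- Build the position map from the second list once, then look up each
-- element of the first list: O(n log n) overall instead of O(n^2).
indicesIn :: Ord a => [a] -> [a] -> Maybe [Int]
indicesIn xs ys = mapM (`Map.lookup` posMap) xs
  where posMap = Map.fromList (zip ys [0..])
Here indicesIn "dazb" "abzd" == Just [3,0,2,1], with Nothing signalling an element missing from the second list.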
A:
This can also be done in O(n log n) time with a couple of sorts. I assume that the second list is a permutation of the first one.
import Data.List
import Data.Ord
import Data.Function
correspIx :: Ord a => [a] -> [a] -> [(Int, Int)]
correspIx = zip `on` map fst . sortBy (comparing snd) . zip [0..]
correspIx returns a list of pairs with the indices corresponding to each other:
correspIx "dazb" "abzd" == [(1,0),(3,1),(0,3),(2,2)]
We need another sort to get the result indicated in the question:
correspIx' :: Ord a => [a] -> [a] -> [Int]
correspIx' xs ys = map snd $ sortBy (comparing fst) $ correspIx xs ys
Now correspIx' "dazb" "abzd" == [3,0,2,1].
|
[
"ux.stackexchange",
"0000045619.txt"
] | Q:
Is a book more user friendly than an e-book?
Lately I've had this thought that books are more user friendly than applications. Why do I say that? Because of these reasons:
You can visibly see, at any given time, where you are and how much you have left
You don't have to rely on anything breaking down (other than your eyes?) in order to read.
You can pass it to other people to read and they can give it back, and you can see what they've done to your book.
My question is, is there research that can prove if a book is more user friendly (and more desirable) than an e-book?
A:
Is a book more user friendly than an e-book?
Yes.
But note that an e-book is also more user friendly than a book.
Context is important. Users are important. Objectives and goals are important. And all will play into that question and produce a different answer for different people.
A:
Paper has numerous advantages over a simple digital medium (in addition to those you mentioned):
You can feel pages (a book is much more physically responsive than a tablet)
You can use pages to perform various tasks (bookmarking, etc.)
You can modify it (take notes, highlight)
Paper has cultural inertia (we have used it pretty much forever)
The digital medium can, for the most part, replicate these features and many modern e-readers provide this and more (including features like instant dictionary lookup, search and hyperlinking, which aren't possible with paper).
Despite this, many studies have findings [1] which show that individuals often still have a preference for paper. An interesting study on making the digital medium more competitive with paper found the digital medium lacking in these affordances [2]:
Tangibility: paper can be touched, moved around, "zoomed", etc.
Annotation: as mentioned previously, you can easily modify paper
Page orientation: paper maintains its physical orientation, whereas orientation is easily lost in a digital document (this is probably why e-readers maintain the distinct page metaphor instead of merging all text into a single view)
Multiple Displays: Paper provides an unlimited amount of "displays" because it is easy to lay anything you need out in front of you
Sharing: as you mentioned, paper is easily sharable (just hand your book over)
Legibility: this concern is a bit dated (see e-ink paper and high contrast displays), but this study found that digital displays were harder to read than paper
[1]: Ruth Wilson. 2002. The "look and feel" of an ebook: considerations in interface design. In Proceedings of the 2002 ACM symposium on Applied computing (SAC '02). ACM, New York, NY, USA, 530-534. DOI=10.1145/508791.508893 http://doi.acm.org/10.1145/508791.508893
[2]: Bill N. Schilit, Gene Golovchinsky, and Morgan N. Price. 1998. Beyond paper: supporting active reading with free form digital ink annotations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '98), Clare-Marie Karat, Arnold Lund, Joëlle Coutaz, and John Karat (Eds.). ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 249-256. DOI=10.1145/274644.274680 http://dx.doi.org/10.1145/274644.274680
A:
It's not. It's just user-friendly in a different way. There are things you cannot do with your e-book, like seeing at a glance where your bookmark is, reading without power (well, e-ink doesn't really consume much power, but still), or easily handing it to a friend without losing the possibility of reading another one.
But you cannot instantly send a paper book over a distance (like sharing fragments via email), read it at night without an external light (some e-readers are backlit), etc.
So it's not a comparison of how much more user-friendly one is than the other, but of what features each one offers. They are different, even though the main function remains the same.
It's like comparing a bike and a car - they are substitutes at some very general level, but when you go deeper you find out that they are completely different stories.
|
[
"magento.stackexchange",
"0000193417.txt"
] | Q:
Hide attribute menu for admin role
I made a user role that needs to create configurable products; to do this I need to give it the ACL for Store/Attribute/Product.
But I don't want to let it create / add / delete an attribute, so I removed the Create New Attribute button in the configuration panel.
There is still one problem: the user can go to the Store > Attribute > Product menu.
How can I make it invisible for my user role?
A:
Magento 2 provides "User Roles" for this.
1) Create new User Roles
2) Assign resources to disable Store > Attribute > Product (the original answer showed this step in a screenshot)
This disables the Store > Attribute > Product menu, and when the user adds a configurable product they can't click the Add Attributes button.
You can reference this document
|
[
"stackoverflow",
"0049686903.txt"
] | Q:
Difference between "ionic cordova plugin add" and "npm install @ionic-native/plugin --save"
I have been trying to use the ionic-native plugins provided by Ionic 3. When I read the install instructions, there are always 2 command lines instead of one.
ionic cordova plugin add cordova-plugin-camera
npm install --save @ionic-native/camera
If my memory serves me right, a single command similar to ionic plugin add somepluginhere would get the job done in the old days.
What are the differences here?
A:
The difference is they are different packages.
ionic cordova plugin add
This command downloads the Cordova plugin (in this case, camera), updates config.xml and package.json, saves the plugin in the plugins folder, and sets it up for each of your platforms.
Ionic leverages the Cordova CLI to do this.
ionic-native
Ionic Native is simply a wrapper around the corresponding plugin.
npm install --save @ionic-native/camera
It installs the package @ionic-native/camera to your node_modules folder, records it in package.json, and nothing more.
This wrapper allows you to inject the corresponding Cordova plugin as an Angular provider wherever you need it, instead of declaring a global variable or resorting to other workarounds.
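For illustration, this is roughly how the wrapper is then consumed in an Ionic 3 page (a sketch following the ionic-native docs; the page itself is made up):
import { Component } from '@angular/core';
import { Camera, CameraOptions } from '@ionic-native/camera';

@Component({
  selector: 'page-photo',
  template: '<button ion-button (click)="takePhoto()">Take photo</button>'
})
export class PhotoPage {
  // injected like any other Angular provider
  constructor(private camera: Camera) {}

  takePhoto() {
    const options: CameraOptions = {
      quality: 80,
      destinationType: this.camera.DestinationType.DATA_URL
    };
    // calls through to the cordova-plugin-camera installed earlier
    this.camera.getPicture(options)
      .then(imageData => console.log('got image'))
      .catch(err => console.error(err));
  }
}
Note that Camera must also be listed in the app module's providers array.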
|
[
"english.stackexchange",
"0000356711.txt"
] | Q:
How do I express the position of an object by counting from the last?
I have some events: a occurred first and d occurred last. The order is as follows:
a > b > c > d
How do I express the position of c? I'd like to count from the last. Is it correct to say:
c is the second last event
What are the alternative words?
A:
The word you are looking for is penultimate.
M-W:
penultimate: next to the last, e.g., the penultimate chapter of a book
|
[
"stackoverflow",
"0045564644.txt"
] | Q:
node js , mongodb : check if an object already exist
I'm a newbie to Node.js and MongoDB. How can I check if an object already exists in the collection? Note that my field type in the schema is Object (or JSON).
const BillSchema = mongoose.Schema(
{
content: {
type: Object //or JSON
},
}
);
const Bill = module.exports = mongoose.model('Bill', BillSchema);
module.exports.addBill = function (newBill, callback) {
//Check for all bill titles and content, if newBill doesn't exist then add else do nothing
Bill.count({ content: newBill.content }, function (err, count) {
//count == 0 always ???
if (err) {
return callback(err, null);
} else {
if (count > 0) {
//The bill already exists in db
console.log('Bill already added');
return callback(null, null);
} else { //The bill doesnt appear in the db
newBill.save(callback);
console.log('Bill added');
}
}
});
}
A:
Nice question. I had to achieve the same task before; I made use of the mongoose-unique-validator third-party npm package and plugged it into the schema:
https://www.npmjs.com/package/mongoose-unique-validator
npm install mongoose-unique-validator
var uniqueValidator = require('mongoose-unique-validator');
const BillSchema = mongoose.Schema(
{
content: {type:Object , unique:true },
}
);
BillSchema.plugin(uniqueValidator, {message: 'is already taken.'});
Usage:
module.exports.addBill = function (newBill, callback) {
newBill.save(callback);
}
I hope this works for you too.
|
[
"stackoverflow",
"0060917374.txt"
] | Q:
How to convert undefined to a string in TestCafé?
I need to convert the Selector's attribute into a string so that I can take a particular portion of the id's text.
async getTitleID(TitleName){
var TitleID = Selector('span').withText(TitleName);
console.log(TitleID);
var getTitleID = await TitleID.getAttribute('id');
console.log(getTitleID);
var getTitleIDStr = (getTitleID.toString());
// if( getTitleID!=null ){
console.log(getTitleIDStr);
var Title = getTitleIDStr.substring(40, 51);
console.log(Title);
// }
return Title
}
I got the error code: 1) TypeError: Cannot read property 'substring' of undefined
A:
You can't convert 'undefined' to a string or anything else. 'Undefined' is the result of a call that returned nothing, and this 'nothing' does not have the 'substring' method. I recommend that you debug your test code to see what's going on: Debug Tests.
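If the attribute may legitimately be missing, one defensive option (a sketch reusing the question's names) is to guard before slicing:
import { Selector } from 'testcafe';

// inside your page-model class
async getTitleID (TitleName) {
    const titleID = await Selector('span').withText(TitleName).getAttribute('id');
    // getAttribute resolves to no value when the element or attribute
    // is missing, so check before calling substring
    if (typeof titleID !== 'string') {
        throw new Error('No id attribute found for title: ' + TitleName);
    }
    return titleID.substring(40, 51);
}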
|
[
"stackoverflow",
"0037466781.txt"
] | Q:
constructing polynomials by recursion in sage
I'd like to construct the following family of polynomials:
https://math.stackexchange.com/questions/1801056/construction-of-polynomials-in-sagemath
I tried using the function
R=PolynomialRing(QQ,'x',n), but it doesn't work.
The difficulty is that I can't do recursion on the indexes of the variables.
A:
Is this what you are looking for?
n=36
x=['x%d' % (k) for k in range(n)]
R=PolynomialRing(QQ,x)
x=[R.gen(i) for i in range(n)]
a=[x[k]*reduce(lambda a,b: a+b, x[:k]) for k in range(1,35)]
For example
a[7]
gives
x0*x8 + x1*x8 + x2*x8 + x3*x8 + x4*x8 + x5*x8 + x6*x8 + x7*x8
|
[
"stackoverflow",
"0043849831.txt"
] | Q:
Cannot configure the publishing extension for gradle
I am using Gradle v3.4 with the following build.gradle file. However, I get the error copied below with any task. Any thoughts on what might be misconfigured in the build.gradle file?
error
What went wrong:
A problem occurred evaluating root project 'some-test'.
> Cannot configure the 'publishing' extension after it has been accessed.
The error points to where the publishing task begins.
build.gradle
group 'some.group'
version '0.0.1' //-SNAPSHOT'
apply plugin: 'java'
apply plugin: 'application'
apply plugin: 'idea'
apply plugin: 'com.google.protobuf'
apply plugin: 'maven'
apply plugin: 'maven-publish'
apply plugin:'com.github.johnrengelman.shadow'
buildscript {
repositories {
mavenLocal()
mavenCentral()
maven {
credentials {
username project.properties['nexusUsername']
password project.properties['nexusPassword']
}
url project.properties['nexus.url.snapshot']
}
jcenter()
}
dependencies {
classpath 'com.google.protobuf:protobuf-gradle-plugin:0.8.0'
classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.4'
}
}
if (!JavaVersion.current().java8Compatible) {
throw new IllegalStateException("Must be built with Java 8 or higher")
}
mainClassName = "com.some.project.some.class"
defaultTasks 'clean', 'build', 'shadowJar', 'install'
compileJava {
sourceCompatibility = '1.8'
targetCompatibility = '1.8'
}
[compileJava, compileTestJava]*.options*.encoding = 'UTF-8'
repositories {
maven {
credentials {
username project.properties['nexusUsername']
password project.properties['nexusPassword']
}
url project.properties['nexus.url.snapshot']
}
mavenLocal()
mavenCentral()
jcenter()
}
def grpcVersion = '1.1.2'
def log4j2Version = '2.8.1'
def configVersion = '1.3.1'
def jacksonVersion = '2.8.7'
dependencies {
compile "io.grpc:grpc-netty:${grpcVersion}"
compile "io.grpc:grpc-protobuf:${grpcVersion}"
compile "io.grpc:grpc-stub:${grpcVersion}"
compile group: 'com.fasterxml.jackson.core', name: 'jackson-core', version: jacksonVersion
compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: jacksonVersion
compile group: 'com.fasterxml.jackson.core', name: 'jackson-annotations', version: jacksonVersion
compile group: 'org.apache.logging.log4j', name: 'log4j-api', version: log4j2Version
compile group: 'org.apache.logging.log4j', name: 'log4j-core', version: log4j2Version
compile 'io.netty:netty-tcnative-boringssl-static:1.1.33.Fork26'
compile group: 'org.bouncycastle', name: 'bcprov-jdk16', version: '1.46'
compile "com.typesafe:config:${configVersion}"
testCompile group: 'junit', name: 'junit', version: '4.11'
testCompile "org.mockito:mockito-core:1.9.5"
compile 'commons-lang:commons-lang:2.3'
}
sourceSets {
main {
proto {
srcDir '../../proto' // In addition to the default 'src/main/proto'
}
java {
}
}
test {
proto {
srcDir '../../proto' // In addition to the default 'src/test/proto'
}
}
}
protobuf {
protoc {
artifact = 'com.google.protobuf:protoc:3.2.0'
}
plugins {
grpc {
artifact = "io.grpc:protoc-gen-grpc-java:${grpcVersion}"
}
}
generateProtoTasks {
all()*.plugins {
grpc {
// To generate deprecated interfaces and static bindService method,
// turn the enable_deprecated option to true below:
option 'enable_deprecated=false'
}
}
}
}
idea {
module {
sourceDirs += file("${projectDir}/build/generated/source/proto/main/java");
sourceDirs += file("${projectDir}/build/generated/source/proto/main/grpc");
}
}
jar {
manifest {
attributes(
'Class-Path': configurations.compile.collect { it.getName() }.join(' '),
'Main-Class': 'com.some.project.some.class'
)
}
}
shadowJar {
baseName = 'commons-java'
classifier = null
version = null
}
artifacts {
archives shadowJar
}
publishing {
publications {
shadow(MavenPublication) {
from components.shadow
}
}
repositories {
maven {
credentials {
username "someuser"
password "somepassword"
}
url "http://nexus.somewhere.com/repository/some-test-snapshots/"
}
}
}
tasks.publish.dependsOn 'shadowJar'
startScripts.enabled = false
A:
The issue was how I was reading the properties. Also, I no longer use both the maven and maven-publish plugins (I am using maven-publish only). I am now able to publish to Nexus successfully.
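For reference, one way to make such property reads resilient (a sketch using Gradle's findProperty API, available since Gradle 2.13; property names follow the build file above):
repositories {
    maven {
        credentials {
            // findProperty returns null instead of failing when a property
            // is missing, e.g. on machines without a gradle.properties entry
            username project.findProperty('nexusUsername') ?: ''
            password project.findProperty('nexusPassword') ?: ''
        }
        url project.findProperty('nexus.url.snapshot') ?: ''
    }
}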
|
[
"stackoverflow",
"0059195177.txt"
] | Q:
How to append a row from itertuples to a dataframe without losing the index in Python?
I have the following problem:
I have a DataFrame df which looks like this:
eqpid recpid queuetime trackintime trackouttime
3723 A XYZ 2017-01-01 03:14:58 2017-01-04 03:43:28 2017-01-04 03:43:33
... ... ... ... ... ...
I am iterating through this DataFrame with itertuples() (I checked vectorization & .apply(); they unfortunately do not work here). Now, besides other operations, I want to append the row (which is a namedtuple in my understanding) to another DataFrame with the same columns and keep the initial index, so it looks like this:
eqpid recpid queuetime trackintime trackouttime
... ... ... ... ... ...
3723 A XYZ 2017-01-01 03:14:58 2017-01-04 03:43:28 2017-01-04 03:43:33
In theory, the code should look something like this:
temp_df = pd.DataFrame(columns=df.columns)
for row in df.itertuples():
...
temp_df.append(row)
But this does not work; temp_df remains empty. I also tried something like this:
temp_df = pd.DataFrame(columns=df.columns)
for row in df.itertuples():
...
temp = pd.DataFrame([row],columns = row._fields)
temp.set_index('Index', inplace = True)
temp_df.append(temp)
But even though temp prints like this:
Index eqpid recpid queuetime trackintime trackouttime
3723 A XYZ 2017-01-01 03:14:58 2017-01-04 03:43:28 2017-01-04 03:43:33
temp_df still remains empty. Can someone give me a hint what to do, or what my mistake is?
Thank you all in advance!
A:
Try 'iterrows', which returns rows as (index, Series) tuples:
for idx,ser in df.iterrows():
...
temp_df = temp_df.append(ser)
The Series itself carries the row's index, so the index alignment works. Note also that append returns a new DataFrame rather than modifying in place, which is why the original temp_df stayed empty without reassignment.
|
[
"stackoverflow",
"0056551215.txt"
] | Q:
Pyspark converting between two date types
I am having trouble converting a column of dates with one format to another format in pyspark. I know that there is an easy way to get this achieved but don't know how. I already have them in the format of
2019-05-21T13:35:16.203Z
and I would like it to be in the format
6/10/2019 6:33:34 PM
Part of the problem is that I don't know what these formats are called, which I would need in order to call a Spark DataFrame function to do the conversion.
A:
You need to use the UTC timestamp function if you are trying to convert one of the columns of your DataFrame;
you can also specify whatever time zone you want to convert to in the to_utc_timestamp method.
Here is the working code
from pyspark.sql.functions import date_format, to_utc_timestamp

df = spark.createDataFrame([('2019-05-21T13:35:16.203Z',)], ['input_date'])
df_2 = df.select(df.input_date,
                 date_format(to_utc_timestamp(df.input_date, ""),
                             'MM/dd/yyyy HH:mm:ss aaa').alias('output_date'))
df_2.show(1, False)
+------------------------+----------------------+
|input_date |output_date |
+------------------------+----------------------+
|2019-05-21T13:35:16.203Z|05/21/2019 09:35:16 AM|
+------------------------+----------------------+
|
[
"stackoverflow",
"0004021154.txt"
] | Q:
Finding the index of a string in a tuple
Tup = ('string1','string2','string3')
My program returned 'string2'; how do I get its index within Tup?
A:
>>> Tup.index('string2')
1
Note that the index() method was only added for tuples in Python 2.6 and later.
|
[
"dba.stackexchange",
"0000020848.txt"
] | Q:
SQL Server 2012 Resource Governor Enhancement
In SQL Server 2012, what specific enhancement was made regarding controlling memory usage using Resource Governor? Up until SQL Server 2008 R2, Resource Governor was only able to control memory related to the query memory grant.
I looked in the SQL Server 2012 BOL; however, it does not specifically state anything clearly.
A:
This is a good question, because there is a lot of misleading information out there.
The changes to what memory Resource Governor can manage in SQL Server 2012 roughly coincide with the changes to what memory the Max Server Memory setting can manage. Prior to SQL Server 2012, this was just buffer pool / single-page allocations. In SQL Server 2012 this is extended to multi-page allocations, as illustrated in the diagram from Memory Manager surface area changes in SQL Server 2012 (not reproduced here).
And while the statement is a little over-promising IMHO, the SQL OS team also stated in New SQLOS features in SQL Server 2012:
Resource Governor governs all SQL memory consumption (other than special cases like buffer pool)
When I first started doing research on the changes to this feature, the indication I had was that it could manage all memory allocations except buffer pool and column store cache. Prior to SQL Server 2012, Resource Governor only had control over query grant memory.
There are of course other important enhancements to Resource Governor in SQL Server 2012 that I thought I'd mention for other readers who get here because of your question title. You can now:
have 64 resource pools instead of the previous limit of 20
use real CPU capping (even with no contention) - the previous model relied on concurrency before implementing a cap
implement scheduler / NUMA node affinity (e.g. tie a resource pool to a specific NUMA node)
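For example, the new CPU cap and scheduler affinity options look roughly like this (a sketch; the pool and group names are made up):
CREATE RESOURCE POOL LimitedPool
WITH (
    CAP_CPU_PERCENT = 30,            -- hard cap, applied even without contention
    AFFINITY SCHEDULER = (0 TO 3),   -- tie the pool to specific schedulers
    MIN_MEMORY_PERCENT = 10,
    MAX_MEMORY_PERCENT = 50
);

CREATE WORKLOAD GROUP LimitedGroup
USING LimitedPool;

ALTER RESOURCE GOVERNOR RECONFIGURE;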
Also note there are some differences with how memory manager changes have been implemented in 32-bit vs. 64-bit. Some relevant info here.
|
[
"electronics.stackexchange",
"0000306877.txt"
] | Q:
Do wires need to share similar characteristics in order for crosstalk to occur?
My understanding of crosstalk is how a radio transceiver works. You basically tune your transmitter and receiver to a similar frequency in order to get a cross-coupling of energy (the transmitter's energy gets coupling over to the receiver, hence the receiver is able to listen to what is being broadcasted). Is this analogous true for crosstalk in parallel unshielded, untwisted wires where the 2 wires have similar inductance and capacitance?
A:
Your question is a bit confusing, since your title doesn't quite match what you asked. You also make a few statements that are incomplete.
You basically tune your transmitter and receiver to a similar frequency in order to get a cross-coupling of energy
Receivers and transmitters are often complex systems that include many subsystems: modulators/demodulators, antennas, filters, etc. So "tuning" them is a complex task that requires many parts to interact together.
Is the analogous thing true for crosstalk in parallel unshielded, untwisted wires where the two wires have similar inductance and capacitance?
A capacitance involves two elements. A single wire doesn't have a capacitance on its own; it has a capacitance with something else.
This being said, to answer your title question: no, two wires don't need to share similar characteristics (I assume here that you mean resistance and inductance) to have crosstalk.
See Wikipedia's definition :
In electronics, crosstalk is any phenomenon by which a signal transmitted on one circuit or channel of a transmission system creates an undesired effect in another circuit or channel. Crosstalk is usually caused by undesired capacitive, inductive, or conductive coupling from one circuit, part of a circuit, or channel, to another.
You need some sort of coupling between the two elements. In the case of long unshielded, untwisted wires, there are several sources of coupling:
Capacitive coupling. The two wires act as a capacitor, and capacitors have low impedance at high frequency. A fast time-varying signal, like an aggressive digital edge or a high-frequency radio signal, in one conductor will affect the other conductor. Capacitive coupling has nothing to do with electromagnetic waves.
Inductive coupling. Long wires next to each other also have an inductive coupling, so they act as a transformer. Large currents in one conductor will create a magnetic field that induces an electromotive force in the other conductor.
Electromagnetic coupling (or interference). EMI will mostly be present if you have big changes of current over time. The length, material and shape of the wire all affect its effectiveness at transmitting/receiving EM waves. Although you seemed interested in "tuning" your wire to avoid interference, EMI is not restricted in bandwidth. A current surge in a wire generates noise in a very wide frequency range. Even if you have an antenna tuned for a specific frequency, it will receive noise anyway.
Hope I could make things a little clearer.
|
[
"buddhism.stackexchange",
"0000020035.txt"
] | Q:
Abhava versus Vibhava
Is there a difference between abhava and vibhava? Or are those words synonyms?
If they are not synonyms, what is their exact meaning?
A:
'Vibhava' is a form of 'becoming' or 'self-view', namely, the idea "I do not exist" or "I do not want to exist" or "I will cease to exist". Since the idea of 'I' ('atta') or "a being/person" ('satta') still remains, 'vibhava' is a type of becoming (bhava). This is described in the following suttas:
Tayidaṃ, bhikkhave, tathāgato abhijānāti. Ye kho te bhonto samaṇabrāhmaṇā sato sattassa ucchedaṃ vināsaṃ vibhavaṃ paññapenti
te sakkāyabhayā sakkāyaparijegucchā sakkāyaññeva
anuparidhāvanti anuparivattanti. Seyyathāpi nāma sā gaddulabaddho
daḷhe thambhe vā khile vā upanibaddho, tameva thambhaṃ vā khilaṃ vā
anuparidhāvati anuparivattatianuparivattati evamevime bhonto
samaṇabrāhmaṇā sakkāyabhayā sakkāyaparijegucchā sakkāyaññeva
anuparidhāvanti anuparivattanti.
Those recluses that describe the annihilation, destruction & extermination (vibhavaṃ; non-existence) of an existing being (sattassa);
through fear & disgust with identity (sakkāya), keep running &
circling around that same identity; just as a dog bound by a leash to
a post keeps running & circling around that same post....
MN 102.12
Kathañca, bhikkhave, atidhāvanti eke? Bhaveneva kho paneke aṭṭīyamānā harāyamānā jigucchamānā vibhavaṃ abhinandanti—yato kira, bho, ayaṃ
attā kāyassa bhedā paraṃ maraṇā ucchijjati vinassati na hoti paraṃ
maraṇā; etaṃ santaṃ etaṃ paṇītaṃ etaṃ yāthāvanti. Evaṃ kho, bhikkhave,
atidhāvanti eke.
How, bhikkhus, do some overreach? Now some are troubled, ashamed and disgusted by this very same being and they rejoice in (the idea of)
non-being, asserting: ‘In as much as this self, good sirs, when the
body perishes at death, is annihilated and destroyed and does not
exist after death—this is peaceful, this is excellent, this is
reality!’ Thus, bhikkhus, do some overreach.
Iti 44
As for 'abhava', I do not currently know what it means in Pali Buddhism.
'Abhava' appears to not have the same meaning as 'abhavissā', which seems to mean: 'if was' or 'would have been', as found in the 2nd sermon:
Rūpañca hidaṃ, bhikkhave, attā abhavissa, nayidaṃ rūpaṃ ābādhāya saṃvatteyya
For if, bhikkhus, form were self, this form would not lead to affliction...
'Vibhava' is not a synonym for Nibbana or the Unconditioned. MN 140 states:
So neva taṃ abhisaṅkharoti, na abhisañcetayati bhavāya vā vibhavāya vā. So anabhisaṅkharonto anabhisañcetayanto bhavāya vā
vibhavāya vā na kiñci loke upādiyati, anupādiyaṃ na paritassati,
aparitassaṃ paccattaṃyeva parinibbāyati
He does not form any condition or generate any volition tending towards either being or non-being. Since he does not form any
condition or generate any volition tending towards either being or
non-being, he does not cling to anything in this world. When he does
not cling, he is not agitated. When he is not agitated, he personally
attains Nibbāna.
In Hinduism, 'abhava' appears to refer to a primordial unconditioned state, namely, the unmanifest level from which the concrete bhava arises or emerges.
'Abhava' here, in Hinduism, is not synonymous with 'vibhava' in Buddhism because, in Buddhism, 'bhava' ('becoming') must occur before 'vibhava' can occur.
The idea of 'I' is 'bhava'. The idea of "I" must exist before the idea of "I do not want to exist" can occur. For example, the idea of "I", "me" or "self" must occur before the wish to commit suicide.
|
[
"stackoverflow",
"0019284321.txt"
] | Q:
How do I get a bold version of UIFont's preferredFontForTextStyle?
In iOS 7, users can manipulate their fonts from the control panel, something designed to help (amongst other things) visually impaired users.
I'm trying to work with the new paradigm by using the new methods created to support that functionality. For the most part, it's easy enough -- just use [label setFont:[UIFont preferredFontForTextStyle:UIFontTextStyleHeadline]] or whatever format you need.
But occasionally I need to adjust those. For example, maybe the headline needs to be a bit bigger. I can use this answer for that. Unfortunately, I can't figure out how to apply that answer to other changes, such as simply bolding the font without changing the size.
A:
You can try this:
UIFontDescriptor *descriptor = [UIFontDescriptor preferredFontDescriptorWithTextStyle:UIFontTextStyleHeadline];
/// Add the bold trait
descriptor = [descriptor fontDescriptorWithSymbolicTraits:UIFontDescriptorTraitBold];
/// Pass 0 to keep the same font size
UIFont *font = [UIFont fontWithDescriptor:descriptor size:0];
|
[
"stackoverflow",
"0018367929.txt"
] | Q:
How Can I Pass The Item Bound To A KoGrid Cell To The ViewModel
HTML:
<div data-bind="koGrid: gridOptions" style="height:600px;border:solid 1px #ccc;"></div>
JS:
Column definitions:
{ field: 'orderCatalogUpdateID', cellTemplate: '<button data-bind="click: $userViewModel.removeItem">X</button>', displayName: ' ', width: '2%' }
removeItem function on the ViewModel:
self.removeItem = function (item) {
self.list.remove(item);
}
The item that gets passed to the removeItem function is not the data item that is bound to the row but rather the KoGrid column. How can I get the data item that is bound to the row so I can pass it to the remove function on the observable array?
I have tried wiring up click events with jQuery and a variety of cell templates trying to pass in the data item being bound to the row with no success.
A:
By default, the current data context, which is the current column object, gets passed to the click handler, as described in the documentation:
$data: kg.Column: //the column entity
What you need to pass in is the $parent.entity: //your data model which is the current row entity.
So you need to change your binding:
{
field: 'orderCatalogUpdateID',
cellTemplate: '<button data-bind="click: ' +
' function() { $userViewModel.removeItem($parent.entity); }">X</button>',
displayName: ' ',
width: '2%'
}
Demo JSFiddle.
|
[
"stackoverflow",
"0033596408.txt"
] | Q:
Need example of using fabricjs.d.ts with TypeScript
I use fabricjs in a project I'm attempting to convert to TypeScript, but I can't figure out how to use it. Previously I'd create my own custom objects by doing the following:
my.namespace.Control = fabric.util.createClass(fabric.Object, {
id: "",
type: 'Control',
color: "#000000",
...
});
In my new project, I've installed the type definition file from here, but I can't figure out how I should use it.
Looking at the .d.ts file, fabric.Object doesn't appear to be a Function so isn't allowed to be passed to createClass, and createClass itself returns void, so I can't assign the value to a variable.
Even if all this worked, how should I format this so it works the TypeScript way? I.e., what do I export so that I can import it elsewhere where the Control class is needed?
Anyone actually got any examples of doing this?
A:
The old way of using your fabricjs.d.ts is to reference it at the top of your file:
/// <reference path="path/to/your/fabricjs.d.ts" />
If you get it with tsd, you have to reference your tsd.d.ts bundle file.
But now, with TypeScript 1.6.2 and VS Code, you only need to download the definition file via tsd to get the definitions in your code.
In the d.ts file you can find the two method signatures:
createClass(parent: Function, properties?: any): void;
createClass(properties?: any): void;
So your code seems invalid because fabric.Object, typed as the interface IObjectStatic, is not assignable to type Function. You can cast fabric.Object to a Function by doing:
my.namespace.Control = fabric.util.createClass(<Function>fabric.Object, {
id: '',
type: 'Control',
color: '#000000',
...
});
|
[
"stackoverflow",
"0050777645.txt"
] | Q:
How can I combining Webpack project output with Asp.Net project output in VSTS build?
My repository contains an Asp.Net app and a React app in a separate folder. I need to deploy to an Azure App Service from a VSTS release.
Repository Root
MyAspNetApp
MyReactApp
The Asp.Net application is an MVC application. If it detects you on mobile, it serves up the React app.
The react app is built using WebPack. When you do a production build, it copies the output into a folder called 'app' in the MyAspNetApp project. The production build can be run via 'npm run build-prod'.
When I was doing git deployments (kudu), I just added a command to the deploy.cmd to call 'npm install' and 'npm run build-prod'. Then another command to copy those files to the root of the deployment directory ('wwwroot').
Now that I am using VSTS to build and deploy (separate steps), I can't figure out how to get that 'app' folder into wwwroot. In a build step I tried taking the contents of the 'app' folder and putting them in an artifact called 'mobile'. Then, in a deployment step, I used a 'Copy Files' step to copy the 'mobile' artifact to $(build.artifactstagingdirectory)/app, but the files don't show up in wwwroot on Azure.
What am I missing here?
edit: cross-posted here on the MS VS Community site in hopes of getting a response. I will update this post if I get an answer there.
A:
With the Azure App Service Deploy task, if you check the Publish using Web Deploy option, you need to put all necessary files in a zip file and specify this file in the Package or folder input box.
You can also uncheck the Publish using Web Deploy option and specify the root folder path of the app.
Refer to these steps to do it:
Publish MVC application with File System publish method through Visual Studio build task
Run NPM commands to build React app through NPM task
Copy the React app's built files to the necessary folder of the MVC app's deployed folder
(optional) Zip the folder through the Archive Files task if you want to publish using Web Deploy
Add Azure App Service Deploy task (can be in release) and specify package or folder.
|
[
"stackoverflow",
"0032408000.txt"
] | Q:
Nuget package sources not being saved
My NuGet package manager cannot save new or find existing package sources.
The location where the package sources are saved %appdata%\NuGet\nuget.config is not getting updated since the problem started. Deleting it does not re-create it when I try adding a new package source through the Visual Studio -> Package Manager Sources dialog.
"Package Manager sources" dialog always shows empty on trying to save something and re-opening the dialog.
This problem started on my laptop after changing my corporate password from my office desktop computer (which does not have this issue).
Re-installing "Nuget Package Manager" did not help. Re-installing Visual Studio did not help.
A:
My problem is fixed :)
The step of deleting %appdata%\NuGet\nuget.config may have helped along with other steps.
Restarting Visual Studio re-generated this file with the nuget.org package source added. I am almost sure I had tried this before, but it had not worked then.
The only other major difference was that I was not connected to corporate VPN network when restarting Visual Studio.
After searching everywhere, it doesn't look like anyone has faced (or at least asked about) this issue. Hope this post helps someone in the future.
|
[
"mechanics.stackexchange",
"0000057651.txt"
] | Q:
What would cause repeated failure of ignition coils?
I have a Nissan 2008 Rogue, which is fired by ignition coil. I've been experiencing repeated singular ignition coil failures every 1-2 months now for the past 6 months. I'm on my 6th failure, never the same one, always different. Does anyone have any troubleshooting steps that could help me here?
A:
I will post what I found here and let others add on as they desire. A helpful YouTube video: Youtube. His statement regarding high resistance in the secondary made me think to pull the spark plugs and check the gaps. When I pulled the plugs, I could tell right away the plug gaps were wrong (that's how bad they were: 0.1" instead of 0.04"). I'm pretty disappointed since these plugs have barely 30K miles on them, but I think the plugs were the culprit.
|
[
"stackoverflow",
"0012431451.txt"
] | Q:
D or Go for clustered game server
I'm designing a game, but this question is applicable to any situation that requires bidirectional communication between nodes in a cluster and a main server. I am pretty new to clusters, but I actively program in Go and occasionally in D.
I really want to use a modern language (not C/C++), so I've chosen these two languages because:
Array slices
Good concurrency support
Cross platform & compiles natively (with multiple compiler implementations)
GC (both working on a precise GC)
I've read https://stackoverflow.com/questions/3554956/d-versus-go-comparison and The D Programming Language for Game Development.
At a high level, my game will do most of the processing server side, with the client just rendering the game state from their perspective. The game is designed to scale, so it will need to act in a cluster. Components are mostly CPU bound, and update to a main server asynchronously, which shares game state with clients. Most computation depends on user input, so these events need to be sent down to individual components (hence bi-directional RPC).
Reasons I like D:
Manual memory management
Templates/CTFE
Code safety (@safe, contracts, in/out)
Reasons I like Go:
Standard library (pprof, RPC)
Go routines
go tool (esp. go get -u to install/update remote dependencies)
The client will likely be written in D, but that shouldn't have an impact on the server.
I am leaning towards D because manual memory management is baked into the language. While it doesn't have the nice libraries for RPC, I could theoretically implement that, but I cannot as elegantly implement manual memory management in Go.
Which would you use for this problem given the choice between the two languages?
A:
I expect that either will work and that a lot of it depends on which you prefer, though if you're doing the client in D, I'd advise doing the server in D simply because then there are fewer languages involved. If you use two languages, then anyone working on your project generally has to know them both, and both Go and D are small enough in terms of their user base at this point that few people will know both - though if it's just you working on it, you obviously know both of them already.
However, I would point out that if the problem with using D is the lack of an RPC library, then that isn't a problem, because D is supported by Apache Thrift. So, D does have a solid RPC library, even if it's not in its standard library (in fact, it was one of the fruits of D's first round of participation in Google's Summer of Code).
|
[
"stackoverflow",
"0002009055.txt"
] | Q:
.csproj File - Programmatic adding/deleting files
I've written a C# code generator to generate entities and related class files in our Linq-To-SQL application. The code generator needs to add/delete files in TFS and also add/delete them from a .csproj file so they are included or excluded properly in their respective projects.
I've figured out the TFS interaction but was curious as to the best or standard way of programmatically adding/removing files from .csproj files. Anyone have any experience with this?
Thanks - Randy
A:
.csproj files are just XML, and they conform to an XSD. Adding the new XML elements should be all it takes.
To find the XSD take a look at: Where can I find the schema (XSD) for a .csproj file?
A:
Another option is to use the Visual Studio automation model. From inside Visual Studio you can modify the project using macro code (VBA). This page on MSDN has links to the main automation libraries documentation.
Doing this, you could create a very slick integration. You could for instance have a Visual Studio macro that kicks off you code generation process and then adds the resulting files to the project.
|
[
"stackoverflow",
"0012816874.txt"
] | Q:
How to render into different layers and then combine them together with OpenGL?
I am rendering several layers, which cannot be rendered on just one frame buffer because the modifications would affect all the other layers under it.
How can I render these layers separately in such a way that I can combine them into one final layer? The layers would be rendered with transparency, so when combining them, they would be blended accordingly to all the layers below.
I am currently using FBO to offscreen render them all in one layer, but as I said above, it's not going to work very well when the top-most layer affects all the bottom layers as well.
So how can I combine two (or more, whichever way is faster) FBOs (or use some better method than FBOs?) together as efficiently as possible? Currently I could render them one by one, pull them into RAM and then combine them on a per-pixel basis myself, but that seems like a slow method.
What's the fastest way doing this?
A:
You can use MRT (multiple render targets). You will need to create and bind framebuffer (seems you already know how to do it), then attach several textures to it like
glFramebufferTexture2DEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, texA, 0 );
glFramebufferTexture2DEXT ( GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_TEXTURE_2D, texB, 0 );
Here I attached two textures (texA & texB, which were created using glGenTextures), as color attachments number 0 and 1. Then you just write your colors in shader using not gl_FragColor output variable, but rather gl_FragData[0] for the first color attachment and gl_FragData[1] for the second.
You can then use second pass to combine images stored in texA and texB textures.
P.S. these are the function calls for OpenGL 2. If you use OpenGL 3, the function calls are similar (just without EXT), but you'll need to specify the outputs for your shader manually. I can post the solution for this if necessary.
|
[
"drupal.stackexchange",
"0000018282.txt"
] | Q:
How to ask for user confirmation when submitting a form using FAPI
What I want is this: suppose a user enters the same data twice in two rows; then I want to show her an error/warning, and if she chooses to press submit again, I'd process and accept the form.
How to do it?
A:
use the confirm_form function. See Here and Here for examples
This function returns a complete form array for confirming an action.
The form contains a confirm button as well as a cancellation link that
allows a user to abort the action.
|
[
"stackoverflow",
"0012987525.txt"
] | Q:
Meteor test driven development
I don't see how to do test driven development in meteor.
I don't see it mentioned anywhere in documentation or FAQ. I don't see any examples or anything like that.
I see that some packages are using Tinytest.
I would need response from developers, what is roadmap regarding this. Something along the lines of:
possible, no documentation, figure it out yourself
meteor is not built in a way that you can make testable apps
this is planned feature
etc
A:
Update 3: As of Meteor 1.3, meteor includes a testing guide with step-by-step instructions for unit, integration, acceptance, and load testing.
Update 2: As of November 9th, 2015, Velocity is no longer maintained. Xolv.io is focusing their efforts on Chimp, and the Meteor Development Group must choose an official testing framework.
Update: Velocity is Meteor's official testing solution as of 0.8.1.
Not much has been written about automated testing with Meteor at this time. I expect the Meteor community to evolve testing best-practices before establishing anything in the official documentation. After all, Meteor reached 0.5 this week, and things are still changing rapidly.
The good news: you can use Node.js testing tools with Meteor.
For my Meteor project, I run my unit tests with Mocha using Chai for assertions. If you don't need Chai's full feature set, I recommend using should.js instead. I only have unit tests at the moment, though you can write integration tests with Mocha as well.
Be sure to place your tests in the "tests" folder so that Meteor does not attempt to execute your tests.
Mocha supports CoffeeScript, my choice of scripting language for Meteor projects. Here's a sample Cakefile with tasks for running your Mocha tests. If you are using JS with Meteor, feel free to adapt the commands for a Makefile.
Your Meteor models will need a slight bit of modification to expose themselves to Mocha, and this requires some knowledge of how Node.js works. Think of each Node.js file as being executed within its own scope. Meteor automatically exposes objects in different files to one another, but ordinary Node applications—like Mocha—do not do this. To make our models testable by Mocha, export each Meteor model with the following CoffeeScript pattern:
# Export our class to Node.js when running
# other modules, e.g. our Mocha tests
#
# Place this at the bottom of our Model.coffee
# file after our Model class has been defined.
exports.Model = Model unless Meteor?
...and at the top of your Mocha test, import the model you wish to test:
# Need to use Coffeescript's destructuring to reference
# the object bound in the returned scope
# http://coffeescript.org/#destructuring
{Model} = require '../path/to/model'
With that, you can start writing and running unit tests with your Meteor project!
A:
Hi all, check out Laika - a whole new testing framework for Meteor:
http://arunoda.github.io/laika/
You can test both the server and client at once.
See some laika example here
See here for features
See concept behind laika
See Github Repository
Disclaimer: I'm the author of Laika.
A:
I realize that this question is already answered, but I think this could use some more context, in the form of an additional answer providing said context.
I've been doing some app development with meteor, as well as package development, both by implementing a package for meteor core, as well as for atmosphere.
It sounds like your question might be actually a question in three parts:
How does one run the entire meteor test suite?
How does one write and run tests for individual smart packages?
How does one write and run tests for his own application?
And, it also sounds like there may be a bonus question in there somewhere:
4. How can one implement continuous integration for 1, 2, and 3?
I have been talking and begun collaborating with Naomi Seyfer (@sixolet) on the meteor core team to help get definitive answers to all of these questions into the documentation.
I had submitted an initial pull request addressing 1 and 2 to meteor core: https://github.com/meteor/meteor/pull/573.
I had also recently answered this question:
How do you run the meteor tests?
I think that @Blackcoat has definitively answered 3, above.
As for the bonus, 4, I would suggest using circleci.com at least to do continuous integration for your own apps. They currently support the use case that @Blackcoat had described. I have a project in which I've successfully gotten tests written in coffeescript to run unit tests with mocha, pretty much as @Blackcoat had described.
For continuous integration on meteor core, and smart packages, Naomi Seyfer and I are chatting with the founder of circleci to see if we can get something awesome implemented in the near term.
|
[
"pt.meta.stackoverflow",
"0000005050.txt"
] | Q:
How do I increase my reputation?
I just joined the community, and I'm a technology enthusiast. I'd like to know how to increase my reputation so I can do more things in this community!
Thanks.
A:
Enthusiastic about technology in general or about a specific technology? If it's a specific one, you have more chances of earning points. I myself answer many questions on HTML, CSS, JavaScript and PHP, which are closer to my area; some people answer many questions about C++ and C, and so on.
Here on the site, the Help section already has an explanation of how to earn points:
What is reputation? How do I earn (or lose) points
Here is a summary:
You can earn at most 200 reputation points per day from any combination of the following activities (only bounties received and accepted answers are not subject to the daily reputation limit).
You earn reputation when:
One of your questions receives an upvote: +10 points.
One of your answers receives an upvote: +10 points.
One of your answers is marked as "accepted": +15 points.
You accept someone else's answer on your question: +2 points (you don't earn points for accepting your own answer).
A suggested edit of yours is accepted: +2 points (users with 2000 points or more don't earn points for edits).
Your answer received a bounty: the bounty amount is set by whoever offers it and can range from 50 to 500 points.
Your answer received a bounty automatically: you receive half of the bounty (see more details on how bounties work).
How to search for unanswered questions
As I explained in "What is the difference between isaccepted and hasaccepted?", you can search for questions that have no accepted answer by typing hasaccepted:no in the search box (in this case, try to write a better answer than the existing ones); click to test:
https://pt.stackoverflow.com/search?q=hasaccepted%3Ano
To search for questions with no answers at all, type answers:0 in the search box; click here to test:
https://pt.stackoverflow.com/search?q=answers%3A0
If you want to search within a specific technology, add something like [NAME OF THE TAG YOU WANT] to the search, for example:
https://pt.stackoverflow.com/search?q=%5Bhtml%5D
You can then combine the terms in the search, for example:
Unanswered questions about HTML:
https://pt.stackoverflow.com/search?q=%5Bhtml%5D+answers%3A0
Unanswered questions about HTML and CSS:
https://pt.stackoverflow.com/search?q=%5Bhtml%5D+%%5Bcss%5D+answers%3A0
|
[
"math.stackexchange",
"0001274939.txt"
] | Q:
Varieties and ideals
I'm doing the exercises from Fulton of Algebraic Geometry and I'm stuck in the problem 2.44
Let $V$ be a variety in $\mathbb{A}^{n}$, $I=I(V)\subset k[x_{1},\ldots,x_{n}]$, $P\in V$, and let $J$ be an ideal of $k[x_1,\ldots,x_n]$ which contains $I$. Let $J'$ be the image of $J$ in $\Gamma(V)$. Prove that there exists a natural homomorphism $\varphi$ from $\mathcal{O}_{P}(\mathbb{A}^{n})/J\mathcal{O}_{P}(\mathbb{A}^{n})$ to $\mathcal{O}_{P}(V)/J'\mathcal{O}_{P}(V)$, and prove that $\varphi$ is an isomorphism. In particular, $\mathcal{O}_{P}(\mathbb{A}^{n})/I\mathcal{O}_{P}(\mathbb{A}^{n})$ is isomorphic to $\mathcal{O}_{P}(V)$.
If anyone can help I'll really appreciate it :)
A:
Let $R=k[x_1,\ldots,x_n]$ so I don't have to keep writing it.
I'm going to construct this explicitly, although I'm going to do it in two stages.
First we need a map $\psi$ from $O_P(\Bbb{A}^n)\to O_P(V)$.
Define $\psi(f/g)=\frac{\overline{f}}{\overline{g}}$, where $\overline{f}$ is the image of $f$ under the canonical projection from $R$ to $R/I$ and similarly for $g$. Since $g(P)\ne 0$, and $P\in V$, $g\not \in I$, so $\overline{g}\ne 0$. Thus this is a well defined map, and I will let you check that it is a surjective homomorphism.
Then let $\tau: O_P(V)\to O_P(V)/J'O_P(V)$ be the usual projection. $\tau$ is a surjective homomorphism as well.
Then let $\sigma= \tau \circ \psi : O_P(\Bbb{A}^n) \to O_P(V)/J'O_P(V)$.
Since $\tau$ and $\psi$ are surjective homomorphisms, $\sigma$ is as well. We just need to find the kernel of $\sigma$.
Suppose $f/g$ maps to 0 under $\sigma$. Then
$$\frac{\overline{f}}{\overline{g}}\in J'O_P(V).$$
Multiplying by $\overline{g}$ we have $\overline{f}\in J'O_P(V)$.
Then $$\overline{f}=\sum_{i=1}^n\frac{\overline{j_i}}{\overline{g_i}}$$ for $\overline{j_i}\in J'$, $\frac{1}{\overline{g_i}}\in O_P(V)$ such that $\overline{g_i}$ is the representative of some $g_i\in R$ with $g_i(P)\ne 0$ (note this is possible by absorbing the numerator into the $\overline{j_i}$).
Now multiplying by all the $\overline{g_i}$ we have
$$\overline{f}\prod_i \overline{g_i} \in J'$$ since $J'$ is an ideal and the denominators all cancel,
so $$f\prod_i g_i=\alpha$$ for some $\alpha$ in $J$ since $I\subset J$.
Finally dividing by the product of the $g_i$ since the $g_i$ are invertible in $O_P(\Bbb{A}^n)$ gives that $f\in JO_P(\Bbb{A}^n)$. Then $f/g \in JO_P(\Bbb{A}^n)$, so $\ker \sigma \subset JO_P(\Bbb{A}^n)$.
We are almost done. To show $JO_P(\Bbb{A}^n)\subset \ker \sigma$, we note that if $a=\sum_{i} j_ia_i$ for $j_i\in J$, $a_i\in O_P(\Bbb{A}^n)$, $\sigma(a)=0$ since $\psi(j_i)\in J'$, so $\sigma(j_i)=0$.
Therefore $\ker\sigma=JO_P(\Bbb{A}^n)$, so the desired natural isomorphism exists by the first isomorphism theorem.
|
[
"stackoverflow",
"0043648063.txt"
] | Q:
Fortran MPI code open different files with the same unit number
I'm running a fortran code in multi processes using Open MPI. Each process needs to open and write many files. During the run time, it's possible that two different processes will open and write different files with the same unit number concurrently.
processA: open(unit=10, file1)
processB: open(unit=10, file2)
Will this cause a problem?
A:
Yes, it is possible, and no, it should not cause problems. The MPI processes all live on their own and are not aware of the memory (and therefore the unit numbers) of the other processes. You should be careful not to create too many files, though: if you use thousands of processes you may run into limitations of the filesystem.
|
[
"stackoverflow",
"0039063269.txt"
] | Q:
Change canvas size in xmgrace (not dragging)
I am trying to plot something in xmgrace, but I encountered a problem when I modified the size of the axis label and tick label fonts: the fonts are too big and push the labels off the page, so that when I try to print I get the following error:
[Error] Output is truncated - tune device dimensions
The following picture is an example of the problem (the y axis label is slightly out of the page):
Now, to solve this I could double-click on the black squares on the corners of the canvas (red circle in the picture) and drag the canvas' border to make the label fit. But this is not precise enough because it has to be done manually.
How can I change the canvas dimensions more precisely, i.e. from keyboard?
A:
You can change the dimensions of the plot by clicking
Plot > Graph Appearance
The "Viewport" settings control the start and end positions of the x and y axes relative to the edges of the canvas.
For example, to avoid your y axis label being cut off, increase Xmin by a small amount (the default value is 0.15; try changing it to 0.2). You might also want to change Xmax by the same increment in order to preserve your graph's aspect ratio.
|
[
"rpg.stackexchange",
"0000110678.txt"
] | Q:
Does a sole natural weapon attack count as two handed when using power attack?
Does a sole natural weapon attack count as two handed when using power attack for the purpose of determining damage? For the dnd 3.5 system?
Power attack states:
On your action, before making attack rolls for a round, you may choose to subtract a number from all melee attack rolls and add the same number to all melee damage rolls. This number may not exceed your base attack bonus. The penalty on attacks and bonus on damage apply until your next turn.
If you attack with a two-handed weapon, or with a one-handed weapon wielded in two hands, instead add twice the number subtracted from your attack rolls. You can’t add the bonus from Power Attack to the damage dealt with a light weapon (except with unarmed strikes or natural weapon attacks), even though the penalty on attack rolls still applies.
Natural attack states:
A creature’s primary natural weapon is its most effective natural attack, usually by virtue of the creature’s physiology, training, or innate talent with the weapon. An attack with a primary natural weapon uses the creature’s full attack bonus, and its damage includes its full Strength modifier (1-1/2 times its Strength bonus if the attack is with the creature’s sole natural weapon).
Does this mean that for the purposes of power attack a creature with only one natural attack would count this attack as being “two handed” for the purposes of applying 2X the amount subtracted from the attack roll to the damage?
A:
It is common enough to houserule them to do so, to mirror the 1½ Str bonus they receive to damage, but the official rules do not specify this anywhere, so as written natural attacks do not get the 2:1 damage bonus that two-handers do. They receive only the usual 1:1 bonus (e.g., subtracting 5 from attack rolls adds 5 to damage, not 10), and that only because they are excepted from the clause that specifies that light weapons receive no bonus.
|
[
"electronics.stackexchange",
"0000173731.txt"
] | Q:
Choose the better values (in terms of range) for resistors in this non-inverting op-amp circuit
These days I'm looking at operational amplifiers; from what I've seen, implementing them in a circuit is quite simple, at least when they are connected as "non-inverting". Determining the gain/amplification is possible by doing a calculation of two resistors, R1 and R2 (should R2 be called a "feedback resistor"?)
(The image is taken from http://mustcalculate.com/electronics/noninvertingopamp.php.)
Let me do a practical example to explain where my questions are:
In my example I choose to implement an op-amp (for example, the TLV272, which is also "rail to rail") as a "non-inverting amplifier". Then I want to increase a voltage of 10 volts to 15 volts (to be sure, I will feed the op-amp with a power supply of 15 volts). Well: by the equation I have to choose a value of 20 kΩ for R1 and a value of 10 kΩ for R2, which is equal to an amplification of 3.522 dB (voltage gain 1.5).
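To make that explicit, here is the gain calculation worked through, using the resistor naming from the linked calculator (where the non-inverting gain is 1 + R2/R1):
$$A_V = 1 + \frac{R_2}{R_1} = 1 + \frac{10\,\mathrm{k\Omega}}{20\,\mathrm{k\Omega}} = 1.5, \qquad A_{\mathrm{dB}} = 20\log_{10}(1.5) \approx 3.52\ \mathrm{dB}$$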
OK, but I could also do the same by choosing R1 as 200 kΩ and R2 as 100 kΩ, or increase these values until R1 of 200 MΩ and R2 of 100 MΩ (or at the totally opposite: R1 of 2 milliohm and R2 of 1 milliohm): in all these cases I will still have a gain of 1.5, but with totally different ranges of resistors, in terms of values.
I can't understand the criteria (in terms of range) by which these resistors should be chosen. Maybe the criteria are related to the kind of signal which the op-amp will have to manipulate at its input? Or what else? And as a practical example, what will be the difference if I increase a signal using "R1 = 2 kΩ, R2 = 1 kΩ" versus "R1 = 200 MΩ, R2 = 100 MΩ"?
EDIT:
I've seen that my question has been edited, also to correct my grammar: thank you. I'm sorry for my misspellings, but English is not my main language. Next time I will try to be more accurate with my grammar.
A:
As you have figured out, the gain is only a function of the ratio of the two resistors. Therefore, at first glance, 2 kΩ / 1 kΩ, and 2 MΩ / 1 MΩ are equivalent. They are, ideally, in terms of gain, but there are other considerations.
The biggest obvious consideration is the current that the two resistors draw from the output. At 15 V out, the 2kΩ/1kΩ combination presents a load of 3 kΩ and will draw (15 V)/(3 kΩ) = 5 mA. The 2MΩ/1MΩ combination will likewise only draw 5 µA.
What does this matter? First, you have to consider whether the opamp can even source 5 mA in addition to whatever load you want it to drive. Perhaps 5 mA is no problem, but obviously there is a limit somewhere. Can it source 50 mA? Maybe, but probably not. You can't just keep making R1 and R2 lower, even keeping their ratio the same, and have the circuit continue to work.
Even if the opamp can supply the current for the R1+R2 value you picked, you have to consider whether you want to spend that current. This can be a real issue in a battery operated device. 5 mA continuous drain may be a lot more than the rest of the circuit needs, and the major reason for short battery life.
There are other limits too at high resistances. High impedance nodes in general are more susceptible to picking up noise, and high-value resistor have more inherent noise.
No opamp is perfect, and its input impedance is not infinite. The R1 and R2 divider forms a voltage source of impedance R1//R2 driving the inverting input of the opamp. With 2MΩ/1MΩ, this parallel combination is 667 kΩ. That needs to be small compared to the opamp's input impedance, else there will be significant offset error. The opamp input bias current must also be taken into account. For example, if the input bias current is 1 µA, then the offset voltage caused by the 667 kΩ source driving the input is 667 mV. That's a large error, unlikely to be acceptable.
Another problem with high impedance is low bandwidth. There will always be some parasitic capacitance. Let's say for example that the net connected to the two resistors and the inverting input has 10 pF capacitance to ground. With 667 kΩ driving it, you have a low pass filter at only 24 kHz. That may be acceptable for a audio application, but a serious problem in many other applications. You might be getting a lot less gain at high frequencies than you expect from the gain-bandwidth product of the opamp and the feedback gain.
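To spell out that corner-frequency arithmetic, the standard first-order RC low-pass formula with the values above gives:
$$f_c = \frac{1}{2\pi R C} = \frac{1}{2\pi \cdot 667\,\mathrm{k\Omega} \cdot 10\,\mathrm{pF}} \approx 23.9\ \mathrm{kHz}$$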
As with everything in engineering, it's a tradeoff. You have two degrees of freedom in chosing the two resistors. The gain you want only nails down one degree. You have to trade off the current requirements and output impedance to decide the second.
A:
As mentioned above, low value feedback resistors have relatively high current which the amplifier must drive. In an inverting amplifier, Rin sets the input impedance, so it is best not to have too low a value because the signal source must drive this.
At the other end of the scale, very large resistors not only generate noise (thermal or Johnson noise), but due to the natural capacitance* of the part, they form a filter in the feedback loop, which at worst can undermine the loop stability of the amplifier. Quite apart from changing the ac response of your circuit in interesting and hair-pulling ways, this effect gets worse at lower gains, and at gains of below 4 (typically, depends on the specific amplifier) can bite quite painfully. Indeed, there are numerous amplifiers designed specifically to have a minimum gain and are unstable below this gain (the advantages include better transient specifications).
As a general rule, I limit feedback resistors to no more than ~220k for either inverting or non-inverting configurations. If this does not yield sufficient gain, use an extra gain stage.
There are tricks one can do (a T network of resistors in the feedback loop is a well known one) to raise the gain of a single stage, but amplifiers are cheap and take up negligible space.
In inverting topologies, the choice of feedback resistor is primarily driven by the requirements of the signal source which sets the input resistor (usually minimum) size.
* This becomes clear when one defines capacitance as existing between any two points of differing electrical potential.
HTH
A:
To give a really short answer: something in the range of tens of kΩs will probably be good (with most OP-amp models and for most applications). Try 40 kΩ for R1 and 20 kΩ for R2.
This is of course not ideal in all circumstances, but it should usually work fine with a reasonable tradeoff between power consumption and noise level. Olin Lathrop and Peter Smith have explained in detail what disadvantages you get with resistance values that are too high or too low.
|
[
"stackoverflow",
"0043045351.txt"
] | Q:
Getting sub links of a URL using jsoup
Consider a URL like www.example.com. It may have plenty of links; some may be internal and others external. I want to get a list of all the sub-links, not the sub-sub-links, but only the sub-links.
E.G if there are four links as follows
1)www.example.com/images/main
2)www.example.com/data
3)www.example.com/users
4)www.example.com/admin/data
Then out of the four, only 2 and 3 are of use, as they are sub-links, not sub-sub-links and so on. Is there a way to achieve this through jsoup? If it cannot be achieved through jsoup, can someone point me to some other Java API?
Also note that each result should be a link under the parent URL which is initially sent (i.e. www.example.com).
A:
If I understand correctly that a sub-link contains exactly one slash, you can attempt this by counting the number of slashes, for example:
List<String> list = new ArrayList<>();
list.add("www.example.com/images/main");
list.add("www.example.com/data");
list.add("www.example.com/users");
list.add("www.example.com/admin/data");
for(String link : list){
if((link.length() - link.replaceAll("[/]", "").length()) == 1){
System.out.println(link);
}
}
link.length(): counts the number of characters
link.replaceAll("[/]", "").length(): counts the number of characters excluding slashes, so the difference is the number of slashes
If the difference equals one, it is a sub-link; otherwise it is not.
EDIT
How will I scan the whole website for sub-links?
The answer to this lies in the robots.txt file (the Robots Exclusion Standard), which lists many of a site's sub-paths, for example https://stackoverflow.com/robots.txt. So the idea is to read this file and extract the sub-links from it. Here is a piece of code that can help you:
public static void main(String[] args) throws Exception {
//Your web site
String website = "http://stackoverflow.com";
//We will read the URL https://stackoverflow.com/robots.txt
URL url = new URL(website + "/robots.txt");
//List of your sub-links
List<String> list;
//Read the file with BufferedReader
try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
String subLink;
list = new ArrayList<>();
//Loop throw your file
while ((subLink = in.readLine()) != null) {
//Check if the sub-link is match with this regex, if yes then add it to your list
if (subLink.matches("Disallow: \\/\\w+\\/")) {
list.add(website + "/" + subLink.replace("Disallow: /", ""));
}else{
System.out.println("not match");
}
}
}
//Print your result
System.out.println(list);
}
This will show you :
[https://stackoverflow.com/posts/, https://stackoverflow.com/posts?,
https://stackoverflow.com/search/, https://stackoverflow.com/search?,
https://stackoverflow.com/feeds/, https://stackoverflow.com/feeds?,
https://stackoverflow.com/unanswered/,
https://stackoverflow.com/unanswered?, https://stackoverflow.com/u/,
https://stackoverflow.com/messages/, https://stackoverflow.com/ajax/,
https://stackoverflow.com/plugins/]
Here is a demo of the regex that I use.
Hope this can help you.
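A different, complementary approach, since the question asks specifically about jsoup, is to crawl the parent page itself and keep only the anchors that sit exactly one path segment below it. This is a minimal sketch under that assumption (the class name and the filtering rule are my own, not a jsoup API):
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.util.ArrayList;
import java.util.List;

public class SubLinkCollector {
    public static void main(String[] args) throws Exception {
        String parent = "http://www.example.com";
        // Fetch and parse the page with jsoup
        Document doc = Jsoup.connect(parent).get();
        List<String> subLinks = new ArrayList<>();
        // Select every anchor that has an href attribute
        for (Element a : doc.select("a[href]")) {
            String abs = a.absUrl("href"); // resolves relative links
            // Keep only links under the parent URL...
            if (abs.startsWith(parent + "/")) {
                String path = abs.substring(parent.length() + 1);
                // ...with exactly one path segment (no further slashes)
                if (!path.isEmpty() && !path.contains("/")) {
                    subLinks.add(abs);
                }
            }
        }
        subLinks.forEach(System.out::println);
    }
}
Note that this only sees links that actually appear on the fetched page, whereas the robots.txt approach above sees whatever paths the site chooses to declare; depending on the site you may want to combine both.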
|
[
"meta.stackexchange",
"0000211426.txt"
] | Q:
Should beginner questions be offered optimized code?
So, regarding this question. I answered that to append to a JTextArea, you simply don't use the setText() method, you use append(). As the OP was obviously not a skilled programmer, I thought a simple answer would do. Why complicate things with discussion of what seems to me to be premature optimization?
The other answer thought differently (not mentioning the lack of description :P). Should I be offering the most optimal solution even at a slight cost to clarity?
A:
For questions that have an easy, perfectly fine (read: not open to SQL injection attacks, etc.) but not great answer, plus an advanced solution that's beyond the abilities of the OP, I usually answer with a main answer section and an "Ideas for improvement" section. This gives the best of both worlds.
This is useful both for the OP (as they can stretch themselves if they choose) and for future visitors who may be able to implement the advanced solution
A:
Regarding "Ideas for improvement", if there's just a few couple of things and you have time/energy to add them then feel free to do so. Otherwise, there is a whole other StackExchange site dedicated for that:
Code Review
If you feel that the user could use some more in-depth explanations about how to write cleaner / more optimized code, feel free to redirect them to Code Review. However, be sure that you read Code Review's about page before asking them to go there. In summary, Code Review is for:
Working code (once they got their problem fixed on StackOverflow, they can go to Code Review to make their code even better), or code that is believed to be working.
Optimizing speed of code
Improving the cleanliness of the code
Code Review is not for:
Fixing problems with existing code
Understanding what the code does (Teachers are better at this)
Helping users debug errors or incorrect results
Improving code written by someone other than the asker
Questions of low quality
|
[
"apple.stackexchange",
"0000112840.txt"
] | Q:
How To Turn Off Mac’s Monitor screen
How can I turn off my Mac's monitor? I know I can turn off the display by setting a hot corner or by using ctrl + shift + Eject, but I don't want this.
I want to leave my Mac on and connect to it via TeamViewer from another place, but when I do that the Mac's display turns on. I do this with my PC in the office: I leave my PC on but simply turn off the monitor by pushing the button on the bottom right corner :) This means my PC is still on but the monitor is off and does not show anything on the screen. I do this sometimes in case I have to connect to my PC from home, without letting anyone know that my PC is on!
UPDATED:
Sorry for not being clear,
It is "iMac" OS X version 10.8.5
Can I do this ?
Thanks
A:
You didn't describe what kind of Mac it is, but I'll assume an iMac or MacBook of some flavor. If you had a Mac Mini or a Pro with a third party monitor then you probably wouldn't be asking this question.
You can't do exactly what you want with an iMac, but if you use "Apple Remote Desktop" (App Store, $80) you can lock the screen so anybody sitting at your computer can't see what you are doing. This doesn't seem to be possible with the built in screen sharing.
The Apple Remote Desktop option is more secure, since you can't be sure that nobody turned your PC monitor on after you leave work.
|
[
"stackoverflow",
"0046120897.txt"
] | Q:
ng-admin doesn't initialize in another ui-view
I have faced a problem where ng-admin is not rendering inside a simple ui-view.
Here is the code:
$stateProvider
.state('home', {
url: '/',
template: homeTemplate
})
.state('login', {
url: "/login",
template: loginTemplate,
controller: 'LoginController',
controllerAs: 'vm'
})
.state('register', {
url: "/register",
template: "<div>Register is under maintenance</div>",
})
;
And here is Html:
<html ng-app="myApp">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>Admin-tool</title>
<meta name="google" value="notranslate">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!--<link rel="shortcut icon" href="favicon.ico" />-->
</head>
<body ng-strict-di class="main-content">
<div id="content-body" class="content">
<div ui-view=""></div>
</div>
<pg-footer></pg-footer>
</body>
Also there is an info message
WARNING: Tried to load angular more than once.
Maybe this is the cause? Could you please point out why it doesn't even render a simple ng-admin page?
In browser it just looks like:
<!-- uiView: -->
<div ui-view="" class="ng-scope"></div>
A:
Well, ng-admin has its own routing, so it was necessary to make a login route and a default route on which ng-admin initializes. If you need to make custom pages inside ng-admin, you just need to specify that in your state like this: parent: 'ng-admin'
|
[
"stackoverflow",
"0000781468.txt"
] | Q:
Hibernate DAO implementation
Can anyone suggest a DAO implementation for a web application?
What will be the problem if I create a transaction for each fundamental operation (e.g. findByID(), findALL(), createObject(), deleteObject(), etc.)?
Please suggest a DAO implementation that supports lazy operations.
A:
If you use Hibernate Tools to generate your code, the basic DAOs will be generated automatically for you. You can build upon them.
Anyway, here is a code snippet I use for transactions:
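// Runs the given unit of work (Transact) inside a Hibernate transaction:
// commits on success, rolls back and wraps the error on failure.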
public void executeTransaction(Object[] parameters, Transact transact) throws ApplicationException
{
Transaction tx = null;
try
{
tx = HibernateSessionFactory.getSession().beginTransaction();
transact.execute(parameters, tx);
tx.commit();
LOG.trace("executeTransaction() success");
}
catch (Exception e)
{
rollback(tx);
throw new ApplicationException(e);
}
}
private void rollback(Transaction tx) throws ApplicationException
{
LOG.warn("rollback()");
if (tx != null)
{
try
{
tx.rollback();
}
catch (Exception ex)
{
LOG.error("rollback() failure",ex);
}
}
}
public interface Transact
{
public void execute(Object[] parameters, Transaction tx) throws Exception;
}
void updateDistrictImpl(final Distretto district) throws ApplicationException, ApplicationValidationException
{
try
{
LOG.trace("updateDistrict[" + distrettoToString(district) + "]");
executeTransaction(null, new Transact() {
public void execute(Object[] parameters, Transaction tx) throws ApplicationException
{
DistrettoHome DistrettoDAO = new DistrettoHome();
DistrettoDAO.attachDirty(district);
}
});
LOG.info("updateDistrict[" + distrettoToString(district) + "] success!");
}
catch (ApplicationException e)
{
LOG.error("updateDistrict() exception: " + e.getLocalizedMessage(), e);
throw e;
}
}
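To address the DAO part of the question more directly, here is a minimal, hypothetical generic DAO sketch in the same Hibernate 3 style, reusing the HibernateSessionFactory helper from the snippets above (GenericDao and its method names are illustrative, not a standard API):
import java.io.Serializable;
import java.util.List;

import org.hibernate.Session;

public class GenericDao<T, ID extends Serializable> {

    private final Class<T> persistentClass;

    public GenericDao(Class<T> persistentClass) {
        this.persistentClass = persistentClass;
    }

    // Lazy: returns a proxy and hits the database only on first access.
    @SuppressWarnings("unchecked")
    public T loadById(ID id) {
        Session session = HibernateSessionFactory.getSession();
        return (T) session.load(persistentClass, id);
    }

    // Eager: returns null if no row exists.
    @SuppressWarnings("unchecked")
    public T findById(ID id) {
        Session session = HibernateSessionFactory.getSession();
        return (T) session.get(persistentClass, id);
    }

    @SuppressWarnings("unchecked")
    public List<T> findAll() {
        Session session = HibernateSessionFactory.getSession();
        return session.createCriteria(persistentClass).list();
    }
}
Note that load() only works while the session is still open, which is essentially why the open-session-in-view pattern is popular for lazy operations in web applications. Wrapping single reads like findById() in their own transactions is not a problem per se, but lazy associations fetched after the session closes will throw LazyInitializationException.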
|
[
"stackoverflow",
"0042186533.txt"
] | Q:
Why does passing a linked list pointer to a function from main() affect the linked list in main?
I'm writing a program in C. The program receives, from standard input, a file path to a file which contains data. Then a linked list is built from the data. The linked list has to be circular; however, for simplicity's sake (for adding nodes and printing the list) I'm transforming the circular list into a regular non-circular linked list. This is done with the uncirc function. In the end I assemble the list back into a circular structure using the circ function.
I'm passing a pointer to the linked list to the function printList, which prints the contents of the list. However, after using uncirc from inside printList, the list actually remains "uncirc"-ed even in main. As far as I know pointers are passed by value, so doing anything with the list inside printList should not have affected the original list.
The code is below (I included only the essential functions pertaining to the problem, otherwise the code would be pretty large). I suspect some of you could say that I could easily print the list even in its circular structure, but what truly bothers me is that the original list is altered through a pointer.
#include <stdio.h>
#include <stdlib.h>
#define MAX_FILE_NAME_LEN 300
#define MAX_LINE_LEN 300
#define MATERIAL_LEN 100
#define FIELDS_IN_LIGHTING_NUM 8
enum l_type {
TABLE = 1, WALL, CEILING
};
typedef struct Lighting {
enum l_type type;
int length;
int width;
int height;
int bulbs;
char material[MATERIAL_LEN];
int strength;
struct Lighting * next;
} Lighting;
char * getFileName();
int getVolume(Lighting * light);
Lighting * uncirc(Lighting * light);
Lighting * circ(Lighting *light);
void addNode(Lighting **head, FILE *fd);
void printNode(Lighting * light);
void printList(Lighting * light);
int countLines(FILE *fd);
void printMaxLight(Lighting * light);
int main() {
FILE * fd;
char * path;
Lighting * n1 = NULL;
int linesInFile, lightNum, i;
path = getFileName();
if(!(fd = fopen(path, "r+"))) {
printf("Cannot open file %s\n", path);
fprintf(stderr, "Cannot open file %s\n", path);
exit(0);
}
linesInFile = countLines(fd);
lightNum = linesInFile / 7;
for(i = 0; !(feof(fd)) && i < lightNum; i++) {
addNode(&n1, fd); //read file data and create node
//7 lines of data are required to create node
}
fclose(fd);
printList(n1); //print the linked list
return 0;
}
Lighting * uncirc(Lighting * light) {
Lighting * p = light;
if(p == NULL) {
return p;
}
while(p -> next != light) {
p = p -> next;
}
p -> next = NULL;
return light;
}
Lighting * circ(Lighting *light) {
Lighting * p = light;
if(p == NULL) {
return p;
}
while(p -> next != NULL) {
p = p -> next;
}
p -> next = light;
return light;
}
void printList(Lighting * light) {
Lighting * p;
p = uncirc(light);
if(p == NULL) {
printf("Empty list\n");
return;
}
while(p != NULL) {
printNode(p);
p = p -> next;
}
}
A:
The list isn't contained in the pointer; only the address of the first element is there. And you are not modifying the address that the pointer in main contains at any point.
If you pass the address of the first element, and then use it to traverse the list and modify the elements, naturally it will be visible when you use the same address for traversal again.
Side note
while(p -> next != light) {
If the list you pass isn't circular, this will be an infinite loop.
|
[
"ru.stackoverflow",
"0000194573.txt"
] | Q:
Literature on developing applications for mobile devices
I want to learn how to develop applications for mobile devices, preferably for Windows 8 and Android.
I already know HTML/CSS/JavaScript and have experience in web development; I would like to develop using these languages. Please recommend literature, websites, etc.
A:
Knowing only HTML/CSS/JavaScript is clearly not enough. What will you write the server side in? JavaScript? Server-side JS, as far as I know, is so far only Node.js, and applications for mobile devices are not written with it (I may of course be wrong; correct me if I am). If you do want to write for Android, you will have to learn Java (which is for the best anyway: a programmer who knows only what you listed can hardly be considered a full-fledged programmer in most cases). And if (out of inexperience, for example) you think that since you are familiar with JavaScript you will quickly figure out Java too, I hasten to disappoint you: apart from the similar names and (partially) the syntax, these languages have little in common. Besides, you can also write for Android in C# (proof; the documentation can be found there as well). Knowing C# will also help you if you want to develop for Windows Phone (or whatever it is called these days).
As for books: here, here, and one more here. That covers Android. And as for Windows 8, specifically for you: JavaScript and all that.
|
[
"emacs.stackexchange",
"0000014597.txt"
] | Q:
How to make a dot match a newline
Often I need to use regular expressions to match strings across multiple lines. In python and other languages, it's possible to ask that dots match newlines (as opposed to the default behavior). Is it possible to do this using the default regexp utilities without hacking the source?
I know I can use a negated character class, which will consume newlines if the newline is not in the class, but this is too limited since what I really want is to just match any character at all.
A:
While erjoalgo's answer is correct, according to the Emacs Wiki Multiline Regex page, it is not the most efficient answer:
To match any number of characters, use this: .* – the problem is that . matches any character except newline. What many people propose next works, but is inefficient if you assume that newlines are not that common in your text: "\\(.\\|\n\\)". Better match multiple lines instead: "\\(.*\n?\\)*". The newline is optional so that the expression can end in the middle of a line. Better yet: "[\0-\377[:nonascii:]]*" avoids “Stack overflow in regexp matcher” for huge texts, e.g., > 34k.
A:
Actually, I just noticed I can do this with \(.\|[\n]\)*. For example,
[code] $ sudo wpa_supplicant -B -i wlan0 -Dwext -c universitywpa
Successfully initialized wpa_supplicant
ioctl[SIOCSIWENCODEEXT]: Invalid argument
ioctl[SIOCSIWENCODEEXT]: Invalid argument
[/code]
-
(progn
(re-search-forward "[[]code]\\(\\(.\\|[\n]\\)*?\\)[[]/code]" )
(match-string 1))
gives
" $ sudo wpa_supplicant -B -i wlan0 -Dwext -c universitywpa
Successfully initialized wpa_supplicant
ioctl[SIOCSIWENCODEEXT]: Invalid argument
ioctl[SIOCSIWENCODEEXT]: Invalid argument
"
A shorthand for this would be nicer, though, something like
(let ((re-dot-match-all t))
(re-search-forward "[[]code]\\(.*?\\)[[]/code]" )
(match-string 1))
Similar to the case-fold-search use. Or a named character class:
(re-search-forward "[[]code]\\([[:any:]]*?\\)[[]/code]" )
|
[
"stackoverflow",
"0009190259.txt"
] | Q:
AS3: Is it possible to generate animated sprite sheets at runtime from vector?
I would like to use Bitmaps in my Actionscript games.
For me this represents a large change in my workflow as I have always used Vector but Bitmaps are really so much faster to render in certain circumstances. As far as I can see, 90% of all my game assets can be bitmaps.
Firstly, are there any good tools for working with Vector to BitmapData? Libraries or OpenSource utilities?
I know you can just draw to a BitmapData, and I do that, but what about Animations? What about a MovieClip of a laughing cow? How can I render that MovieClip at runtime to some kind of Bitmap version?
But more complex than that... What about situations where you do not have the MovieClip in a raw form?
Imagine 10000 cogs turning at the same rate which is generated with code. This is hard work for the processor, so drawing it to a Bitmap for the duration of 1 revolution, would replace 10000 cogs with a SpriteSheet. I could destroy the cogs, and keep the SpriteSheet.
Can anyone offer me any resources or Google keywords I can search for? I'm not sure of the technique, but it seems to make sense, especially with Starling. My Vectors are going to have to become SpriteSheets at some point.
Thanks.
A:
The basic process of converting a movie clip to a sprite sheet is this:
1. Choose a movie clip.
2. Get the bounds of the movie clip. You need to get the width and height of the widest and tallest frame of animation.
3. Get the number of frames of the movie clip.
4. Make a new BitmapData object that is as wide as the number of frames times the width of one frame, and as high as one frame.
5. Loop through each frame of the clip and call bitmapData.draw() on each frame. Be sure to offset the matrix of the draw command on each frame by the width of one sprite frame.
The end result will be a single bitmapdata object with each frame rendered to it.
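If it helps to see the loop concretely, here is the same algorithm sketched in Java with BufferedImage rather than AS3 (renderFrame() is a hypothetical stand-in for stepping the clip and calling bitmapData.draw() with an offset matrix):
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class SpriteSheetBuilder {

    // Builds a one-row sprite sheet: (frames * frameWidth) wide, frameHeight tall.
    public static BufferedImage build(int frames, int frameWidth, int frameHeight) {
        BufferedImage sheet = new BufferedImage(
                frames * frameWidth, frameHeight, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = sheet.createGraphics();
        for (int i = 0; i < frames; i++) {
            // Drawing at x = i * frameWidth mirrors offsetting the draw matrix
            // by one frame width per frame in the AS3 version.
            g.drawImage(renderFrame(i, frameWidth, frameHeight), i * frameWidth, 0, null);
        }
        g.dispose();
        return sheet;
    }

    // Hypothetical per-frame renderer; in AS3 this is where you would
    // gotoAndStop(i) on the clip and draw it into a BitmapData.
    private static BufferedImage renderFrame(int i, int w, int h) {
        return new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    }
}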
From there you can follow this tutorial on blitting.
http://www.8bitrocket.com/2008/07/02/tutorial-as3-the-basics-of-tile-sheet-animation-or-blitting/
|
[
"tex.stackexchange",
"0000124568.txt"
] | Q:
Suppressing the retrieval date in biblatex-apa
I was wondering why biblatex-apa was not suppressing the urldate from being printed. So I looked around and I found the question Odd date output from biblatex-apa, but found out that it was more about the format of the date.
The American Psychological Association style manual states that
"Do not include retrieval dates unless the source material may change over time (e.g., Wikis)" (American Psychological Association, 2010, p. 192)
Question: How do I suppress the retrieval date from being printed (unless perhaps I am citing a wiki)?
bibliography.bib:
@article{myers,
author = {Scott W. Myers and Michael Ballweg and John L. Wedberg},
title = {Assessing the impact of {European} corn borer on corn grown for silage},
url = {http://www.uwex.edu/ces/crops/uwforage/ECB.htm},
urldate = {2013-07-09},
journaltitle = {Focus on Forage},
volume = {3},
issue = {4},
organization = {University of Wisconsin Wisconsin Team Forage},
keywords = {jared},
}
sample.tex:
\documentclass[preview,border=5]{standalone}
\usepackage[american]{babel}
\usepackage{csquotes}
\usepackage[style=apa, backend=biber]{biblatex}
\DeclareLanguageMapping{american}{american-apa}
\addbibresource{bibliography.bib}
\begin{document}
\nocite{*}
\printbibliography
\section*{Desired output:}
\begin{minipage}{\linewidth}
\hangindent=24pt
Myers, S. W., Ballweg, M., \& Wedberg, J. L. (n.d.).
Assessing the impact of European corn borer on corn grown for silage.
\emph{Focus on Forage, 3}.
Retrieved from \url{http://www.uwex.edu/ces/crops/uwforage/ECB.htm}
\end{minipage}
\end{document}
I know that I can manually delete urldate from my entry in the bib file but I would rather not do that as I use the same file when working with other citation styles.
A:
Add this to your preamble to remove urldate fields from entries with url fields which don't look like wikis. Of course you can tweak the regexp to your requirements:
\DeclareSourcemap{
\maps[datatype=bibtex]{
\map{
\step[fieldsource=url,
notmatch=\regexp{wiki},
final=1]
\step[fieldset=urldate, null]
}
}
}
This removes urldate from the data stream which biblatex sees dynamically and so you don't need to touch your .bib.
|
[
"stackoverflow",
"0053763815.txt"
] | Q:
Groupby to create new columns
From a dataframe, I want to create a dataframe with new columns whenever an index value is already found, BUT I don't know in advance how many columns I will create:
pd.DataFrame([["John","guitar"],["Michael","football"],["Andrew","running"],["John","dancing"],["Andrew","cars"]])
and I want :
pd.DataFrame([["John","guitar","dancing"],["Michael","Football",None],["Andrew","running","cars"]])
without knowing how many columns I should create at the start.
A:
df = pd.DataFrame([["John","guitar"],["Michael","football"],["Andrew","running"],["John","dancing"],["Andrew","cars"]], columns = ['person','hobby'])
You can group by person and take the unique values of hobby. Then use .apply(pd.Series) to expand the resulting arrays into columns:
df.groupby('person').hobby.unique().apply(pd.Series).reset_index()
person 0 1
0 Andrew running cars
1 John guitar dancing
2 Michael football NaN
In the case of having a large dataframe, try the more efficient alternative:
df = df.groupby('person').hobby.unique()
df = pd.DataFrame(df.values.tolist(), index=df.index).reset_index()
Which in essence does the same, but avoids looping over rows when applying pd.Series.
A:
Use GroupBy.cumcount to get a counter and then reshape by unstack:
df1 = pd.DataFrame([["John","guitar"],
["Michael","football"],
["Andrew","running"],
["John","dancing"],
["Andrew","cars"]], columns=['a','b'])
a b
0 John guitar
1 Michael football
2 Andrew running
3 John dancing
4 Andrew cars
df = (df1.set_index(['a', df1.groupby('a').cumcount()])['b']
.unstack()
.rename_axis(-1)
.reset_index()
.rename(columns=lambda x: x+1))
print (df)
0 1 2
0 Andrew running cars
1 John guitar dancing
2 Michael football NaN
Or aggregate lists and create a new DataFrame with the constructor:
s = df1.groupby('a')['b'].agg(list)
df = pd.DataFrame(s.values.tolist(), index=s.index).reset_index()
print (df)
a 0 1
0 Andrew running cars
1 John guitar dancing
2 Michael football None
|
[
"stackoverflow",
"0033055682.txt"
] | Q:
Loading up separate html files before they are requested with js
So I have this as my source code and all is well, apart from when I try to load up the pages with a slow internet connection. What happens is that the current page gets faded out and back in, and after the content from the external html file is loaded it just pops in.
I'm trying to tackle this by loading everything once the page is loaded initially; how would that work?
Main JS script link - here
A:
I'm posting this as a separate answer as it focuses on your current approach.
Instead of using .load(), use .get() so it isn't replacing the content of your div right away. Then .fadeOut() the div, replace the HTML, and .fadeIn() upon success.
$.get("news.html", function(data) {
$("#content").fadeOut(function() {
$(this).html(data);
}).fadeIn();
});
I was only able to test this with a slow connection simulator (Network Link Conditioner for Mac OS X), but it ran smoothly for my tests.
|
[
"drupal.stackexchange",
"0000151326.txt"
] | Q:
How to display results tagged with just one term ID when there are multiple filters
I have a blog in Drupal 7 and used Views to display the fields. I have 2 sets of tags displayed on the sidebar:
1) SHAPES
Triangle
Square
Rectangle
2) COLOR
Red
Blue
Green
I created the 2 groups of tags via Structure > Taxonomy > Vocabulary > Add Term
For the View, I created one View with two exposed filters (similar to what was done in this tutorial)
VIEW:
Path: '/blog/tag'
Filter Criteria:
1) 'Content: Shape (exposed)'
Exposed form in Block: 'Yes'
Filter identifier: 'shape_id'
2) 'Content: Color (exposed)'
Exposed form in Block: 'Yes'
Filter identifier: 'color_id'
Exposed form style: BEF
The exposed filters are set to the appropriate block region in Structure > Blocks.
If I click 'triangle', only blog posts tagged with 'triangle' should be displayed. Then if I click 'red', only posts tagged with 'red' should be displayed.
It looks like the correct blog posts are being displayed when I click on the terms. The URL gets appended with IDs from both vocabularies, but I can't seem to get the right combos to display.
Desired URL when 'triangle' is clicked: '/blog/tag/?shape_id=1&color_id=All'
Desired URL when 'red' is clicked: '/blog/tag/?shape_id=All&color_id=1'
So, essentially the results should display items tagged with just one term ID.
However, how it is currently working is that if I click 'triangle', and then 'red', both 'triangle' and 'red' have the 'selected' class (are both bolded) and the url is: '/blog/tag/?shape_id=1&color_id=1'
The only way I was able to get the desired URLs is if I have the '-Any-' option displayed. However, the use case I have is to NOT have '-Any-' listed in the list of tags.
Is this even possible without '-Any-' listed in the list of terms? I have never used multiple exposed filters before, so any guidance would be great as I've researched this for almost a week now. I read in another post that contextual filters would allow me to create separate path aliases that are clean urls, but I have not been successfully able to do this.
Any help would be greatly appreciated. Thank you for your time.
A:
When I tried using the taxonomy term pages approach, I could not figure out how to get the fields displaying properly. The fields displayed were based on the teaser in my Blog content type. And using the exposed filter option was not a solution, since my use case was to display results based on only one tag. What worked for me:
1) Add new view to existing Blog View
2) FORMAT: 'Unformatted list'
SHOW: 'Fields'
3) FIELDS: [add fields that you need displayed]
4) FILTER CRITERIA: 'Content: Published (Yes)'; 'Content: Type (= Blog)'
5) PATH: '/blog/[YOUR VOCABULARY NAME]/%'
6) CONTEXTUAL FILTERS: 'Content: Has Taxonomy Term ID'
WHEN THE FILTER VALUE IS NOT IN THE URL: 'Display all results for the specified field'
WHEN THE FILTER VALUE IS IN THE URL OR A DEFAULT IS PROVIDED: select 'Override title' and in the input field, enter: '[YOUR VOCABULARY NAME]/%1'
select 'Specify validation criteria'
Validator drop-down, select 'Taxonomy Term'
Vocabularies: select appropriate Vocabulary name
Filter value type drop-down: 'Term name converted to term ID'
select 'Transform dashes in URL to spaces in term name filter values'
Action to take if filter value does not validate: select appropriate one that works for your needs (in my case: 'Display contents of "No Results found"')
If you need to display the term name in the heading on the results page, in the same view > HEADER > Add 'Global: Unfiltered Text' > then in text area, add something like "VIEWING RESULTS FOR [YOUR VOCABULARY NAME] / %1"
Save the View. Since I had multiple filters, I cloned the above view and renamed everything. So, I ended up adding 3 additional views to my existing Blog View.
I had previously set URL aliases for Taxonomy Term Paths, so I removed these.
Also, I had to go back to my Taxonomy Terms individually and delete the URL alias that had been generated from the pattern.
To have the list of tags displaying on the right with the desired URL structure of /blog/[YOUR VOCABULARY NAME]/[YOUR TERM NAME], I created a separate new view:
SHOW: 'Taxonomy Terms' of type '[YOUR VOCABULARY NAME]'
Create a Block
'Unformatted list' of 'Fields'
FIELDS: 'Taxonomy: Term Name' > click this and unselect 'Link this field to its taxonomy term page'. Then expand "Rewrite Results" section and select 'Output this field as a link' > in the Link path: 'blog/[YOUR VOCABULARY NAME]/[name]' (the [name] at the end of the url structure should appear as you see it with the square brackets) > select 'Replace spaces with dashes' > Transform the case to 'lower case' > click Apply
FILTER CRITERIA: 'Taxonomy vocabulary: Machine Name (= [YOUR VOCABULARY NAME])'
Save the View
Then in Structure > Blocks, place the block in the appropriate region
Hope this helps someone!
|
[
"ethereum.stackexchange",
"0000023204.txt"
] | Q:
How can I tell from raw transaction data if it succeeded or failed?
I am getting into a bit of chain analysis and am using web3.eth.getTransaction('0x8ab08c56c46ca42091ec44c7c9148fe5eb6e0355eeffb29acb5f6c3326139f9e') to get the following transaction details:
{ blockHash: '0x2e5521db5cb9ea805cf50d979684969656fc3ecda22db58f16881af9d15da083',
blockNumber: 48729,
from: '0x3d0768da09ce77d25e2d998e6a7b6ed4b9116c2d',
gas: 115510,
gasPrice: { [String: '55866980572'] s: 1, e: 10, c: [ 55866980572 ] },
hash: '0x8ab08c56c46ca42091ec44c7c9148fe5eb6e0355eeffb29acb5f6c3326139f9e',
input: '0x60606040526040516102b43803806102b48339016040526060805160600190602001505b5b33600060006101000a81548173ffffffffffffffffffffffffffffffffffffffff021916908302179055505b806001600050908051906020019082805482825590600052602060002090601f01602090048101928215609e579182015b82811115609d5782518260005055916020019190600101906081565b5b50905060c5919060a9565b8082111560c1576000818150600090555060010160a9565b5090565b50505b506101dc806100d86000396000f30060606040526000357c01000000000000000000000000000000000000000000000000000000009004806341c0e1b514610044578063cfae32171461005157610042565b005b61004f6004506100ca565b005b61005c60045061015e565b60405180806020018281038252838181518152602001915080519060200190808383829060006004602084601f0104600302600f01f150905090810190601f1680156100bc5780820380516001836020036101000a031916815260200191505b509250505060405180910390f35b600060009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff163373ffffffffffffffffffffffffffffffffffffffff16141561015b57600060009054906101000a900473ffffffffffffffffffffffffffffffffffffffff1673ffffffffffffffffffffffffffffffffffffffff16ff5b5b565b60206040519081016040528060008152602001506001600050805480601f016020809104026020016040519081016040528092919081815260200182805480156101cd57820191906000526020600020905b8154815290600101906020018083116101b057829003601f168201915b505050505090506101d9565b90560000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000000000000000000000000000000c48656c6c6f20576f726c64210000000000000000000000000000000000000000',
nonce: 8,
to: null,
transactionIndex: 0,
value: { [String: '0'] s: 1, e: 0, c: [ 0 ] },
v: '0x1b',
r: '0x323e3ddd61c16d168fc73e1935e584b79e114a782fd424697abae985ee5d5906',
s: '0x7f0b4c986006eebb87342b04e6b147778ca678393d520ced5f336c1ab081bab9' }
Etherscan tells me that this transaction failed. How can I deduce that information from the above data?
UPDATE: Using web3.eth.getTransactionReceipt('0x8ab08c56c46ca42091ec44c7c9148fe5eb6e0355eeffb29acb5f6c3326139f9e') does not yield much more info either:
{ blockHash: '0x2e5521db5cb9ea805cf50d979684969656fc3ecda22db58f16881af9d15da083',
blockNumber: 48729,
contractAddress: '0xf914866d52b690553c0aacece3b38cc8b463ea50',
cumulativeGasUsed: 115510,
from: '0x3d0768da09ce77d25e2d998e6a7b6ed4b9116c2d',
gasUsed: 115510,
logs: [],
logsBloom: '0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000',
root: '0xab42de6a98857259b73d8a1081ed068942cbff6e6b673a931ce97df3dc3350ee',
to: null,
transactionHash: '0x8ab08c56c46ca42091ec44c7c9148fe5eb6e0355eeffb29acb5f6c3326139f9e',
transactionIndex: 0 }
A:
One way is to check if gas sent == gas used. If so, you can be reasonably certain the transaction aborted. Gas sent is in the getTransaction info: 115510; gas used is in the getTransactionReceipt info, also 115510. See this comprehensive write-up.
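As a sketch (assuming the synchronous web3 0.x style used in the question; note this is a heuristic, since a successful transaction could in principle consume exactly its gas limit):
var hash = '0x8ab08c56c46ca42091ec44c7c9148fe5eb6e0355eeffb29acb5f6c3326139f9e';
var tx = web3.eth.getTransaction(hash);
var receipt = web3.eth.getTransactionReceipt(hash);
// If every unit of gas was consumed, the transaction most likely threw.
if (receipt.gasUsed === tx.gas) {
    console.log('Probably failed (all gas consumed)');
} else {
    console.log('Probably succeeded');
}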
To access further info, I think you'll need to enable transaction tracing on your node and use the node-specific tools to explore the trace logs. See Parity's trace module, for example, which is able to return the success status of all transactions, etc. I think Geth's debug management API can do something similar.
|
[
"codereview.stackexchange",
"0000069969.txt"
] | Q:
Normal distribution array function
I have a function that produces an array with values that follow a Gaussian (normal) distribution. If you plot the values on a graph, it would look something like this:
Is there any way to improve on the code?
private static IComparable[] NonUniformDistributionsGaussian(int startNumber,int arraySize)
{
IComparable [] arr = new IComparable[arraySize];
Double[] intervals = { ((1.00 / 6.00) * arraySize), ((2.00 / 6.00) * arraySize),
((3.00 / 6.00) * arraySize),((4.00 / 6.00) * arraySize),
((5.00 / 6.00) * arraySize), ((6.00 / 6.00) * arraySize) };
for (var i = 0; i < arraySize; i++)
{
if (i <= (int)intervals[0])
{
startNumber = startNumber + 1;
}
else if ( i <= (int)intervals[1])
{
startNumber = startNumber + 2;
}
else if (i <= (int)intervals[2])
{
startNumber = startNumber + 3;
}
else if (i <= (int)intervals[3])
{
startNumber = startNumber - 3;
}
else if (i <= (int)intervals[4])
{
startNumber = startNumber - 3;
}
else
{
startNumber = startNumber - 2;
}
arr[i] = startNumber;
}
return arr;
}
A:
As the conditions of the if statements depend only on some fixed numbers and the arraySize input parameter, these values can be precalculated.
int oneThirdArraySize = (int)((1.00 / 3.00) * arraySize);
and then used in the loop like
for (var i = 0; i < arraySize; i++)
{
    if (i <= oneThirdArraySize)
    {
        startNumber = startNumber + 1;
    }
    // and so on for the other branches
    arr[i] = startNumber;
}
By precalculating the right-hand values of the if conditions outside of the loop you can speed this up, because right now you repeat these calculations on every iteration.
But we can do better because you are using some magic numbers here which we can hide behind some meaningful const variables.
private const double oneThird = 1d / 3d;
private const double twoThird = 2d / 3d;
In this way the calculations need to be changed to
int oneThirdArraySize = (int)(oneThird * arraySize);
A:
I don't understand the logic or how it generates a Gaussian distribution.
Assuming the logic itself is correct, you could simplify the code.
Key points:
assignment statement identical in all if branches, so pull it out of the conditional logic (DRY principle)
No reason to do floating-point arithmetic - array indexes have to be integers so it's not clear what you're accomplishing
use if/else to avoid repeating the inverse of the first if branch (DRY principle again)
since there are only 3 mutually exclusive possibilities of the array index (first third, second third, final third) you don't even need to specify the final condition explicitly.
Improved version:
private static IComparable[] NonUniformDistributionsGaussian(int startNumber, int arraySize)
{
IComparable [] arr = new IComparable[arraySize];
for (var i = 0; i < arraySize; i++)
{
if (i <= arraySize / 3)
{
startNumber = startNumber + 1;
}
else if (i <= 2*arraySize/3)
{
startNumber = startNumber + 2;
}
else
{
startNumber = startNumber - 2;
}
arr[i] = startNumber;
}
return arr;
}
|
[
"stackoverflow",
"0018562935.txt"
] | Q:
How to get a rails form that is submitted with invalid data to repopulate the user entered fields
I'm working on building a Rails 3 + Devise user registration page. This will be an additional page that does not replace the existing Devise registration page. This page will include user info and billing info.
I'm trying to get the form to submit and, if the form fields do not save, have the reloaded page include the user's previously entered data. Here's a snippet:
<%= form_for(User.new, :url => '/pricing/sign_up') do |f| %>
<%= f.label :email %>
<%= f.email_field :email %>
<% end %>
When the form is submitted with invalid data and the view re-renders, the email that was entered is not persisted. How can I make the user's existing input persist, to help the user quickly correct mistakes and submit a valid form?
Thanks
A:
The key is to have the form_for use the right object. So, instead of
<%= form_for(User.new, :url => '/pricing/sign_up') do |f| %>
you should be using an instance variable to contain the object, like this
<%= form_for(@user, :url => '/pricing/sign_up') do |f| %>
The controller actions would look like this:
# Note: this may need to be an `edit` method instead?
def new
@user = User.new
end
# Note: this may need to be an `update` method instead?
def create
@user = User.new(params[:user])
if @user.save
# Do something... Usually a redirect with success message.
else
render :new
end
end
This create method fills the @user object with params from the form. The call to @user.save will, behind the scenes, call @user.valid? and, if no errors are returned, save the record to the database. But this part is key: if @user.valid? does result in errors, the errors collection on @user is populated. Then, after render :new completes and re-renders your user form, the form can display error messages by accessing the @user.errors collection. The way you had it before, the form always held a fresh User.new object, which could never have any errors because it was never used to attempt record validation.
How to display the errors in your form is a matter of preference and a little beyond the scope of this question. Here's a guide: http://guides.rubyonrails.org/active_record_validations.html#displaying-validation-errors-in-views
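For reference, a minimal sketch of such error display in the view (using the @user object from this answer):
<% if @user.errors.any? %>
  <ul>
    <% @user.errors.full_messages.each do |message| %>
      <li><%= message %></li>
    <% end %>
  </ul>
<% end %>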
A:
I think it's because of your form_for declaration where you're creating new instance of User on every call.
If you move the User.new to your controller and render the new action upon failure in the create action, then you should see the user-entered values in the form fields.
Something like the following should work:
# app/controllers/users_controller.rb
def new
@user = User.new
end
def create
@user = User.new(params[:user])
respond_to do |format|
if @user.save
...
else
format.html { render :new }
end
end
end
Then in your view:
<%= form_for(@user, :url => '/pricing/sign_up') do |f| %>
<%= f.label :email %>
<%= f.email_field :email %>
<% end %>
|
[
"stackoverflow",
"0047892026.txt"
] | Q:
Build failed with bytecode error in Android Studio
I have this error:
I have enabled multidex, but it still gives me "multiple dex files define" errors.
compile fileTree(include: ['*.jar'], dir: 'libs')
// Navigation Drawer Library
compile('com.mikepenz:materialdrawer:5.3.0@aar') {
transitive = true
}
// Debugger Tools libraries
debugCompile 'com.facebook.stetho:stetho:1.2.0'
debugCompile 'com.parse:parseinterceptors:0.0.2'
//Google, Inc (Play services) Libraries
compile 'com.google.android.gms:play-services-maps:9.2.0'
// Google, Inc (Support) Libraries
compile 'com.android.support:support-v13:25.1.0'
compile 'com.android.support:support-v4:25.1.0'
compile 'com.android.support:design:25.1.0'
compile 'com.android.support:appcompat-v7:25.1.0'
compile 'com.android.support:multidex:1.0.1'
compile 'org.florescu.android.rangeseekbar:rangeseekbar-library:0.3.0'
// Parse Server API SDK
compile project(':ParseUI-Widget')
compile 'com.parse:parse-android:1.13.1'
compile 'com.parse.bolts:bolts-android:1.4.0'
compile 'com.github.tgio:parse-livequery:1.0.3'
compile 'com.parse:parsefacebookutils-v4-android:1.10.3@aar'
// Facebook, Inc SDKs
compile 'com.facebook.android:audience-network-sdk:4.18.0'
compile 'com.facebook.android:facebook-android-sdk:4.7.0'
compile 'com.facebook.android:account-kit-sdk:4.+'
// Libraries for loading images
compile 'com.facebook.fresco:imagepipeline-okhttp3:0.11.0+'
compile 'com.facebook.fresco:fresco:1.0.0'
// Location Helper Library
compile 'com.squareup.retrofit2:retrofit:2.1.0'
compile 'com.squareup.retrofit:converter-gson:2.0.0-beta1'
// Time library
compile 'joda-time:joda-time:2.9.7'
// Accelaration library
//compile 'com.neumob:neumob-android:3.2.4'
// Others
compile 'com.skyfishjy.ripplebackground:library:1.0.1'
compile 'com.jakewharton:butterknife:8.8.1'
annotationProcessor 'com.jakewharton:butterknife-compiler:8.8.1'
compile 'com.android.support:cardview-v7:25.3.1'
compile 'com.squareup.picasso:picasso:2.5.2'
compile files('libs/mint-4.4.0.jar')
I have added build.gradle file.
I don't know how to solve the multidex issue in Android. My app integrates lots of SDKs. I went through lots of tutorials and blogs and got many suggested solutions; one of them is mentioned below as part of my Gradle setup.
A:
In the project-level build.gradle, add the Google services classpath dependency:
classpath 'com.google.gms:google-services:3.1.1'
Then, at the bottom of the app-level build.gradle, apply the plugin:
apply plugin: 'com.google.gms.google-services'
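For completeness, here is a minimal sketch of the multidex setup itself (the standard pattern, not specific to this project):
// app-level build.gradle
android {
    defaultConfig {
        multiDexEnabled true
    }
}
dependencies {
    compile 'com.android.support:multidex:1.0.1'
}
You then either declare android:name="android.support.multidex.MultiDexApplication" on the <application> element in the manifest, or call MultiDex.install(this) from your Application's attachBaseContext().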
|
[
"rus.stackexchange",
"0000048927.txt"
] | Q:
Can the conjunction "или" ("or") correspond in meaning to the conjunctions "и" ("and") and "а также" ("as well as")?
Can the conjunction "или", when used in a sentence to attach the last member of an enumeration, correspond in meaning to the conjunctions "и" and "а также"?
A:
Can the conjunction "или", when used in a sentence to attach the last member of an enumeration, correspond in meaning to the conjunctions "и" and "а также"?
It can. From the "Объяснительный словарь русского языка" (Explanatory Dictionary of the Russian Language), ed. V. V. Morkovkin:
...Used to attach, to a series of words denoting objects, actions, events, etc. that are homogeneous in some respect, one more word of analogous meaning which, in completing the enumeration, has the character of a certain addition to what has already been listed.
Люди сидели на стульях, в креслах или просто на полу. (People sat on chairs, in armchairs, or simply on the floor.)
|
[
"stackoverflow",
"0020220615.txt"
] | Q:
XSLT 2.0: Does value exist in list of elements?
I have an XML document as follows:
<Document>
<Countries>
<Country>Scotland</Country>
<Country>England</Country>
<Country>Wales</Country>
<Country>Northern Ireland</Country>
</Countries>
<Populations>
<Population country="Scotland" value="5" />
<Population country="England" value="53" />
<Population country="Wales" value="3" />
<Population country="Northern Ireland" value="2" />
<Population country="France" value="65" />
<Population country="" value="35" />
</Populations>
</Document>
I am attempting to write an XSLT statement which will access all Population elements whose "country" attribute is blank OR whose "country" attribute is NOT in /Document/Countries/Country.
My XSLT looks as follows:
<xsl:variable name="countries" select="/Document/Countries/Country" />
<xsl:variable name="populations" select="/Document/Populations/Population" />
<xsl:variable name="populationsNotInList" select="$populations[(@country = '') OR (@country NOT IN $countries)" />
Can you help me fill in the 'NOT IN $countries' part of the $populationsNotInList variable?
I am essentially looking for an output of:
<Population country="France" value="65" />
<Population country="" value="35" />
A:
Use <xsl:variable name="populationsNotInList" select="$populations[(@country = '') or not(@country = $countries)]" /> or better define a key
<xsl:key name="country" match="Countries/Country" use="."/>
then you can do <xsl:variable name="populationsNotInList" select="$populations[(@country = '') or not(key('country', @country))]" />
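Putting it together, a minimal self-contained sketch (element names taken from the sample document):
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:key name="country" match="Countries/Country" use="."/>
  <xsl:template match="/">
    <!-- copy Population elements whose country is blank or not in the Countries list -->
    <xsl:copy-of select="/Document/Populations/Population[@country = '' or not(key('country', @country))]"/>
  </xsl:template>
</xsl:stylesheet>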
|
[
"math.stackexchange",
"0002129318.txt"
] | Q:
Maximize $c^t x + x^T A x$ subject to $x^T x = 1$ where $A \succeq 0$
The vector Bingham-von Mises-Fisher distribution is defined on the sphere $S^{p-1}$ and has density
$$p(x \vert c, A) \propto \text{exp}\{c^Tx + x^TAx\}$$ with respect to the uniform measure on $S^{p-1}.$ Assume $c\in\mathbb{R}^p\setminus \{0\}$ and $A \succeq 0.$ The modal set of the distribution is the set of solutions to the optimization problem in the title. How can I characterize this set and under what conditions does it consist of a single point?
A:
We have a non-convex quadratically constrained quadratic program (QCQP). Let $\mathrm b := -\frac 12 \mathrm c$ and
$$\mathcal L (\mathrm x, \lambda) := \mathrm x^{\top} \mathrm A \, \mathrm x - 2 \mathrm b^{\top} \mathrm x - \lambda (\mathrm x^{\top} \mathrm x - 1)$$
be the Lagrangian. Taking the partial derivatives and finding where they vanish, we obtain
$$\left( \mathrm A - \lambda \mathrm I \right) \mathrm x = \mathrm b \qquad\qquad\qquad \| \mathrm x \|_2 = 1$$
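For completeness, the first equation is just the vanishing gradient in $\mathrm x$,
$$\nabla_{\mathrm x} \mathcal L = 2 \mathrm A \, \mathrm x - 2 \mathrm b - 2 \lambda \, \mathrm x = 0,$$
while the second comes from the derivative in $\lambda$, i.e. the unit-norm constraint.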
We have two cases to consider.
1. If $\lambda$ is an eigenvalue of $\mathrm A$, then the linear system $\left( \mathrm A - \lambda \mathrm I \right) \mathrm x = \mathrm b$ has either no solution at all or infinitely many solutions. In the latter case, we intersect the affine solution space with the unit Euclidean sphere to find solutions of the QCQP.
2. If $\lambda$ is not an eigenvalue of $\mathrm A$, then the linear system $\left( \mathrm A - \lambda \mathrm I \right) \mathrm x = \mathrm b$ has a unique solution. If this unique solution has unit Euclidean norm, we have found the unique solution of the QCQP.
|
[
"stackoverflow",
"0062541937.txt"
] | Q:
Different implementations of begin() and end() of a container
I am practicing implementing containers. My goal is to define the iterators begin() and end()
so that I can have loops in the form of for(auto x : v). My container looks like this:
class Vector{
public:
Vector(initializer_list<double> numbers){
sz = numbers.size();
elem = new double[sz];
int i = 0;
for (auto it = numbers.begin(); it!=numbers.end(); ++it)
elem[i++] = *it;
}
~Vector(){delete [] elem;}
double* begin();
double* end();
private:
double* elem;
int sz;
};
Option 1
This is how I have defined the iterators (and they work perfectly fine in my test cases)
double* Vector::begin(){
return elem;
}
double* Vector::end(){
return &elem[sz];
}
Option 2
This is how they are defined in A Tour of C++
double* Vector::begin(){
return &elem[0];
}
double* Vector::end(){
return &elem[0]+sz;
}
My question
As far as I can see both options work fine (assuming the container is non-empty). Does Option 2 have any advantages compared to Option 1 (and vice versa)? I appreciate any suggestions.
A:
While &elem[sz] and &elem[0]+sz will wind up giving you the same result on most/all systems, the first is actually undefined behavior. When you do
&elem[sz]
you are actually doing
&*(elem +sz)
and that *, the dereference, is to an element that doesn't exist. That is undefined behavior per the C++ standard.
With
&elem[0]+sz
you get a pointer to the first element, which is legal provided the pointer points to an actual array, and then you advance it to one past the end. This is a legal and correct way to get the end iterator, provided elem is not null and points to a valid array.
Another way to do this is to just use
return elem + sz;
as it doesn't require any dereferencing.
A:
Both of these options work, and I would be astonished if the compiler didn't generate equivalent code for each version.
I actually prefer your implementation of begin, since if elem is a pointer, then &elem[0] is redundant compared to elem.
Another option: for end, you could do something like
return begin() + size();
assuming that you have a size() member function. This doesn't require any addresses to be taken and more directly says "the position that's size() steps down from where begin() points." But that's just my opinion. :-)
Hope this helps!
|
[
"stackoverflow",
"0062780399.txt"
] | Q:
Trigger cross-project-pipelines for tags and manual jobs
Gitlab's cross-project-pipeline allows me to specify a branch to run a pipeline for, but I didn't find any such option for a tag.
Since my cross-project-pipeline is also being run deliberately, is it also possible to run all manual jobs in a downstream pipeline?
A:
This should be possible with Triggering pipelines through the API.
You just need to add the following to your CI script:
- curl --request POST --form "token=$CI_JOB_TOKEN" --form ref=master https://gitlab.example.com/api/v4/projects/<project-id>/trigger/pipeline
then update ref to the tag you require, and replace <project-id> with the ID of the project you are triggering.
In terms of the cross-project pipeline being run deliberately and having manual jobs you want to run when being triggered, you'd probably need to re-write the downstream CI file to allow for that, eg:
Upstream CI file:
build:
stage: build
script:
- echo "Do some building..."
# Trigger downstream project (tag v1.0.0), with a random variable
- curl --request POST --form "token=$CI_JOB_TOKEN" \
--form "variables[RANDOM_VARIABLE]=FOO" \
--form ref=v1.0.0 https://gitlab.com/api/v4/projects/<project_id>/trigger/pipeline
Downstream CI file:
.test:template:
stage: test
script:
- echo "Running some test"
test:manual:
extends:
- .test:template
when: manual
except:
- triggers
test:triggered:
extends:
- .test:template
only:
- triggers
So when a triggered job is run, the test:triggered should be the only test job you see in the pipeline.
See only/except documents for more information.
|
[
"stackoverflow",
"0032282404.txt"
] | Q:
Lost support in Xamarin for GCM.Client with Android v5
I've been working on a Xamarin project; however, after I started using Android v5, many packages (including GCM.Client) started showing "Assembly not found for framework Xamarin.Android v5 Support".
Changing the Android Target Version back did not solve the issue.
A:
It turns out that updating the packages was the answer, but before that I had to manually delete the Components which the packages referenced and then reinstall them; that did the trick for me.
|
[
"stackoverflow",
"0043722519.txt"
] | Q:
How to separately label and scale double y-axis in ggplot2?
I have a test dataset like this:
df_test <- data.frame(
proj_manager = c('Emma','Emma','Emma','Emma','Emma','Alice','Alice'),
proj_ID = c(1, 2, 3, 4, 5, 6, 7),
stage = c('B','B','B','A','C','A','C'),
value = c(15,15,20,20,20,70,5)
)
Preparation for viz:
input <- select(df_test, proj_manager, proj_ID, stage, value) %>%
filter(proj_manager=='Emma') %>%
do({
proj_value_by_manager = sum(distinct(., proj_ID, value)$value);
mutate(., proj_value_by_manager = proj_value_by_manager)
}) %>%
group_by(stage) %>%
do({
sum_value_byStage = sum(distinct(.,proj_ID,value)$value);
mutate(.,sum_value_byStage= sum_value_byStage)
}) %>%
mutate(count_proj = length(unique(proj_ID)))
commapos <- function(x, ...) {
format(abs(x), big.mark = ",", trim = TRUE,
scientific = FALSE, ...) }
Visualization:
ggplot (input, aes(x=stage, y = count_proj)) +
geom_bar(stat = 'identity')+
geom_bar(aes(y=-proj_value_by_manager),
stat = "identity", fill = "Blue") +
scale_y_continuous(labels = commapos)+
coord_flip() +
ylab('') +
geom_text(aes(label= sum_value_byStage), hjust = 5) +
geom_text(aes(label= count_proj), hjust = -1) +
labs(title = "Emma: 4 projects| $90M Values \n \n Commitment|Projects") +
theme(plot.title = element_text(hjust = 0.5)) +
geom_hline(yintercept = 0, linetype =1)
My questions are:
Why are the y-values not showing up right? E.g., C is labeled 20, but is nearly hitting 100 on the scale.
How do I adjust the position of the labels so that each sits on top of its bar?
How do I re-scale the y-axis so that both the very short 'count of projects' bars and the long 'project value' bars display well?
Thank you all for the help!
A:
I think your issues are coming from the fact that:
(1) Your dataset has duplicated values. This causes geom_bar to add all of them together. For example there are 3 obs for B where proj_value_by_manager = 90 which is why the blue bar extends to 270 for that group (they all get added).
(2) in your second geom_bar you use y = -proj_value_by_manager but in the geom_text to label this you use sum_value_byStage. That's why the blue bar for A is extending to 90 (since proj_value_by_manager is 90) but the label reads 20.
To get you what I believe the chart you want is you could do:
#Q1: No dupe dataset so it doesnt erroneous add columns
input2 <- input[!duplicated(input[,-c(2,4)]),]
ggplot (input2, aes(x=stage, y = count_proj)) +
geom_bar(stat = 'identity')+
geom_bar(aes(y=-sum_value_byStage), #Q1: changed so this y-value matches your label
stat = "identity", fill = "Blue") +
scale_y_continuous(labels = commapos)+
coord_flip() +
ylab('') +
geom_text(aes(label= sum_value_byStage, y = -sum_value_byStage), hjust = 1) + #Q2: Added in y-value for label and hjust so it will be on top
geom_text(aes(label= count_proj), hjust = -1) +
labs(title = "Emma: 4 projects| $90M Values \n \n Commitment|Projects") +
theme(plot.title = element_text(hjust = 0.5)) +
geom_hline(yintercept = 0, linetype =1)
For your last question, there is no good way to display both of these. One option would be to rescale the small data and still label it with a 1 or 3. However, I didn't do this because once you scale down the blue bars the other bars look OK to me.
|
[
"stackoverflow",
"0028627585.txt"
] | Q:
new Thread, application still running after Stage-close
So I followed this tutorial: https://www.youtube.com/watch?v=gyyj57O0FVI
and I made exactly the same code in javafx8.
public class CountdownController implements Initializable{
@FXML
private Label labTime;
@Override
public void initialize(URL location, ResourceBundle resources) {
new Thread(){
public void run(){
while(true){
Calendar calendar = new GregorianCalendar();
int hour = calendar.get(Calendar.HOUR);
int minute = calendar.get(Calendar.MINUTE);
int second = calendar.get(Calendar.SECOND);
String time = hour + ":" + minute + ":" + second;
labTime.setText(time);
}
}
}.start();
}
After I close the window, the application/thread is still running in the system. My guess is it's because of the infinite loop, but shouldn't the thread be terminated when the application closes?
The second thing is that when I try to set the text of the Label I get this error:
Exception in thread "Thread-4" java.lang.IllegalStateException: Not on FX application thread; currentThread = Thread-4
at com.sun.javafx.tk.Toolkit.checkFxUserThread(Toolkit.java:204)
at com.sun.javafx.tk.quantum.QuantumToolkit.checkFxUserThread(QuantumToolkit.java:364)
at javafx.scene.Parent$2.onProposedChange(Parent.java:364)
at com.sun.javafx.collections.VetoableListDecorator.setAll(VetoableListDecorator.java:113)
at com.sun.javafx.collections.VetoableListDecorator.setAll(VetoableListDecorator.java:108)
at com.sun.javafx.scene.control.skin.LabeledSkinBase.updateChildren(LabeledSkinBase.java:575)
at com.sun.javafx.scene.control.skin.LabeledSkinBase.handleControlPropertyChanged(LabeledSkinBase.java:204)
at com.sun.javafx.scene.control.skin.LabelSkin.handleControlPropertyChanged(LabelSkin.java:49)
at com.sun.javafx.scene.control.skin.BehaviorSkinBase.lambda$registerChangeListener$60(BehaviorSkinBase.java:197)
at com.sun.javafx.scene.control.skin.BehaviorSkinBase$$Lambda$144/1099655841.call(Unknown Source)
at com.sun.javafx.scene.control.MultiplePropertyChangeListenerHandler$1.changed(MultiplePropertyChangeListenerHandler.java:55)
at javafx.beans.value.WeakChangeListener.changed(WeakChangeListener.java:89)
at com.sun.javafx.binding.ExpressionHelper$SingleChange.fireValueChangedEvent(ExpressionHelper.java:182)
at com.sun.javafx.binding.ExpressionHelper.fireValueChangedEvent(ExpressionHelper.java:81)
at javafx.beans.property.StringPropertyBase.fireValueChangedEvent(StringPropertyBase.java:103)
at javafx.beans.property.StringPropertyBase.markInvalid(StringPropertyBase.java:110)
at javafx.beans.property.StringPropertyBase.set(StringPropertyBase.java:143)
at javafx.beans.property.StringPropertyBase.set(StringPropertyBase.java:49)
at javafx.beans.property.StringProperty.setValue(StringProperty.java:65)
at javafx.scene.control.Labeled.setText(Labeled.java:146)
at application.CountdownController$1.run(CountdownController.java:29)
...yes, I am going to read more about threads, but I would like to know the answer to these questions.
A:
Part I
A thread, when created, runs independently of other threads. You have a new thread with an infinite loop, which implies it will keep running forever, even after the stage has been closed.
Normally, using an infinite loop is not advised, because breaking out of it is very difficult.
You are advised to use :
TimerTask
ScheduledExecutorService
You can then call either one of them (based on whatever you are using)
TimerTask.cancel()
ScheduledExecutorService.shutdownNow()
when your stage is closed. You can use something like :
stage.setOnCloseRequest(closeEvent -> {
timertask.cancel();
});
JavaFX APIs (thanks to James_D's comment)
These do not need to be explicitly canceled, as ScheduledService uses daemon threads and AnimationTimer runs on the JavaFX thread; a small sketch follows the list.
ScheduledService
AnimationTimer
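For instance, a minimal sketch using AnimationTimer (its handle method runs on the FX thread, so the label can be updated directly and nothing needs to be cancelled on close):
// Inside initialize(): refresh the clock label on every frame.
AnimationTimer clock = new AnimationTimer() {
    @Override
    public void handle(long now) {
        labTime.setText(java.time.LocalTime.now()
                .format(java.time.format.DateTimeFormatter.ofPattern("H:mm:ss")));
    }
};
clock.start();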
Part II
The second part of your question has been answered time and again on this site.
You need to be on the JavaFX Application thread to use scene graph elements.
Since you have created a new thread and are trying to update the label, which is a JavaFX node, it throws the exception. For more information, please visit:
JavaFX error when trying to remove shape
Why am I getting java.lang.IllegalStateException "Not on FX application thread" on JavaFX?
Javafx Not on fx application thread when using timer
|
[
"stackoverflow",
"0009328233.txt"
] | Q:
ASP.NET - How to get and set MasterPage attribute in page code behind?
I have an ASP.NET page that uses a MasterPage. I have a public bool attribute in the MasterPage called ShowChangePassword. I want to set this to true or false in the page that uses this MasterPage.
I have the following code, but I can't set the attribute:
var masterPage = this.Master;
masterPage.ShowChangePassword = false;
What code do I need to do this?
A:
You need a typed reference to your master page class. Put something like this in your content page:
<%@ Page masterPageFile="~/MasterPage.master"%>
<%@ MasterType virtualPath="~/MasterPage.master"%>
Then you can do:
this.Master.ShowChangePassword = false;
|
[
"crypto.stackexchange",
"0000029123.txt"
] | Q:
How to propagate error in variable-sized message?
I want to design a scheme to encrypt a variable-length message with a secret key, to provide confidentiality. The message is a short human-readable text string with byte granularity, let's say under 300 bytes. I prefer that the ciphertext is the same length as the plaintext (avoid padding).
Along the way, I also want to support integrity-checking on a best-effort basis. Preferably, I want it so that if any bit of the ciphertext is changed, then the decrypted plaintext will look garbled. (The decrypted output will actually be read by a human, and no automated checking is needed.)
It is not acceptable to append a MAC or any kind of check code due to message size restrictions; valid decryption must be inferred from the garbling of the message itself. CPU time is not a problem as long as it's under 0.1 second. An inefficient but secure scheme is okay, but the scheme should be conceptually simple to describe/audit/implement.
The complicating factor is that the message may be shorter than a block (say 16 bytes, for the AES cipher), which means ciphertext stealing can't be used. (Right?)
I'm aware of these facts already:
Using a stream cipher satisfies the variable-length property but makes the ciphertext very malleable; this is undesirable.
Using a block cipher gives the "decryption garble" property desired.
Using CBC mode instead of ECB will mask repeating patterns in the input.
Ciphertext stealing (for ECB or CBC) makes it possible to not increase the message length - but only if the message is at least one block long.
It's possible to use a keyed hash function / MAC for 3 or 4 rounds to design a custom Feistel network cipher.
It might be possible to use a stream cipher and a bytewise adaptation of the Infinite Garble Extension (IGE) mode to achieve garble propagation.
Designing "home-made" crypto not reviewed by experts is frowned upon and may have subtle and fatal errors.
But I don't know what else I need to know, and how to proceed from here. I can post more details on some of the proposed algorithms (such as IGE and Feistel) if needed.
Addendum:
Feistel network idea: (using Python pseudocode)
Let H(k, m) be a MAC (such as HMAC-SHA-512) with secret key k.
Let M be the message to be encrypted.
Let i = floor(M.length / 2).
Algorithm:
M[0 : i] ^= truncate(H(k, M[i : M.length])) # left half XOR H(right half)
M[i : M.length] ^= truncate(H(k, M[0 : i])) # right half XOR H(left half)
M[0 : i] ^= truncate(H(k, M[i : M.length])) # Round 3 to achieve error propagation
M[i : M.length] ^= truncate(H(k, M[0 : i])) # Round 4 due to recommendations
(If half the message length is less than the MAC/hash length then truncating is easy. But if it's longer then some kind of stretching, i.e. CSPRNG, is needed.)
Second addendum:
I'm leaning towards this solution:
preprocessed = all-or-nothing-transform(message)
ciphertext = preprocessed XOR (stream cipher keystream)
A:
The scenario you're facing is well-known in cryptography. You can't afford to expand the message at all (except maybe by some IV). So you can't get strong authentication and have to rely on what is called poor man's authentication: you rely on tampering producing random-looking messages.
Please note that all of the following modes are somewhat block-based, meaning you'd have to use padding (like PKCS#7) to ensure correctness of the data and you have to actually check the padding to protect against POODLE style attacks.
The exact same scenario is given in the full-disk-encryption (FDE) scenario. This gives you four options, plus those added by other answers.
XTS, the standard mode for full-disk encryption. It allows for two tweaks hiding potential patterns. You can use standard padding methods (PKCS#7) if you're not hitting block boundaries, this is the most standard solution but may not scramble enough as it only affects the current block.
PCBC is a mode that propagates errors infinitely into the following decrypted plain texts. It accepts IVs and also hides patterns. This may not be preferable compared to the all-or-nothing transforms but may be the choice if they're too slow. And you may want to avoid this one if possible to not fall against the "same flip attack" although this would still mean that two blocks (32 bytes for AES) are scrambled.
EME, a mode originally designed specifically for full-disk encryption, tackles poor man's authentication best. It turns every block cipher with block size $n$ into a larger block cipher with block size $n^2$ and accepts tweaks to hide patterns and provide random access. With AES this would mean you can encrypt up to 2048 ($=128^2/8$) bytes and make sure the whole block is scrambled if tampered with. The main drawbacks are potential patents and the lower speed of requiring double the number of block cipher calls plus a finite field multiplication, so EME may not be a good choice if patent-freeness is required.
All or nothing transforms. You can use standard XTS or something like that, but apply an all-or-nothing transform on the plain text to make sure the receiver has the complete and correct set of all blocks. This is somewhat non-standard but looks like the best solution.
|
[
"math.stackexchange",
"0000369407.txt"
] | Q:
Using empirical density function as an estimator of a given probability density
We know the empirical distribution function is defined as $F_n(x)=\frac{1}{n}\sum\limits_{i=1}^nI(X_i \leq x)$. Then define the empirical density function as $ f_n(x) = \frac{F_n(x+b_n)-F_n(x-b_n)}{2b_n} $.
I can show, using the definition, that $2nb_nf_n(x)$ has the binomial distribution $B(n,F(x+b_n)-F(x-b_n))$. Further, $E(f_n(x))=\frac{F(x+b_n)-F(x-b_n)}{2b_n}$. Hence as $b_n\rightarrow 0$, $E(f_n(x))\rightarrow f(x)$.
Now, I want to show that $Var(f_n(x))\rightarrow0$ if $b_n\rightarrow 0$ and $nb_n\rightarrow\infty$. However, I can't find the exact form of $Var(f_n(x))$ by explicit calculation. Can anyone help me with it? Thanks in advance.
A:
Since the random variable $2nb_nf_n(x)$ is binomial $(n,p_n)$ with $$p_n=F(x+b_n)-F(x-b_n)$$ its expectation and variance are $np_n$ and $np_n(1-p_n)$ respectively. Thus, $$\mathbb E(f_n(x))=\frac{np_n}{2nb_n}=\frac{p_n}{2b_n}$$ as you said, and $$\mathrm{var}(f_n(x))=\frac{np_n(1-p_n)}{(2nb_n)^2}=\frac{p_n(1-p_n)}{4nb_n^2}$$
Assume that $b_n\to0$. If $f$ is regular enough, then $p_n/b_n\to2f(x)$ hence $\mathbb E(f_n(x))\to f(x)$, as you said. Likewise, $p_n\to0$ hence $$\mathrm{var}(f_n(x))\sim \frac{p_n}{4nb_n^2}\sim\frac{f(x)}{2nb_n}$$ Thus, $\mathrm{var}(f_n(x))\to0$ if and only if $nb_n\to\infty$.
This shows that the regime of interest is indeed $1\ll nb_n\ll n$, for example one can use $b_n=n^{-\beta}$ for any $\beta$ in $(0,1)$.
|
[
"stackoverflow",
"0049919609.txt"
] | Q:
Count urls in chrome browser history that were visited via google search
Is it possible to use the Chrome History API to count the number of URLs in the history that were visited using the Google search engine?
For each visited URL, Chrome History API provides a VisitItem object which can be used to access the visit id of the referrer (referringVisitId attribute).
https://developer.chrome.com/extensions/history#type-VisitItem
If the URL (say https://en.wikipedia.org/wiki/java) is visited (clicked) from google search results, then the value of referringVisitId is always 0 when it should be the visit id of google search results URL. Why is the value of referringVisitId 0 in this case? What is the purpose of referringVisitId attribute?
A:
The Chrome History API does not provide the list of visited URLs on the basis of the referrer.
In fact it provides referringVisitId, but not always.
I did a test and found that the Chrome History API will provide a referringVisitId only if the URL is visited in the same tab using an anchor tag inside the referring webpage.
Otherwise the referringVisitId will always be 0.
Here is the test code I used to grab the history for the https://en.wikipedia.org/wiki/Java_(programming_language) URL:
chrome.history.getVisits({"url":"https://en.wikipedia.org/wiki/Java_(programming_language)"}, function(details){
console.log(details);
});
Separately, if you are trying to build a solution only for Google, I would suggest doing this with a content script instead of the Chrome History API, because I don't think the History API will fulfill your requirement of capturing all the links visited through Google search.
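As a rough sketch of that content-script idea (the message shape is hypothetical, and the referrer test would need refinement for redirect-based result links):
// content script: report pages whose referrer was a Google search results page
if (/^https?:\/\/www\.google\.[^\/]+\/search/.test(document.referrer)) {
    chrome.runtime.sendMessage({ type: 'visitedViaGoogleSearch', url: location.href });
}
// a background page can listen with chrome.runtime.onMessage and keep a running count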
|
[
"stackoverflow",
"0032184182.txt"
] | Q:
Run java program with dependencies after compilation with Gradle
I have compiled my java program which has some dependencies using Gradle:
gradle build
When I run my java program:
cd build/classes/main/
java HelloWorldWithLibs
I get:
Exception in thread "main" java.lang.NoClassDefFoundError:
com/google/api/client/json/JsonFactory at
BigQueryStreamTest.main(BigQueryStreamTest.java:11) Caused by:
java.lang.ClassNotFoundException:
com.google.api.client.json.JsonFactory
What is the proper way to run this java program with the libraries required?
PS: All the libs are passed as parameters of the java command when I run the program in IntelliJ. I would like to be able to do the full process without the help of an IDE.
A:
The easiest way of doing this is to configure a JavaExec task within your Gradle build:
task runMyClass(type: JavaExec, dependsOn: 'compileJava') {
main = 'foo.bar.MyClass'
classpath = sourceSets.main.runtimeClasspath
args 'arg1'
}
Then you can execute it via
gradle runMyClass
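Alternatively, if a plain run task is enough, the application plugin provides one (a sketch; substitute your own main class):
apply plugin: 'application'
mainClassName = 'foo.bar.MyClass'
After that, gradle run compiles the program and runs it with the runtime classpath.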
|
[
"stackoverflow",
"0029379809.txt"
] | Q:
Replacement of certain percentage of element in vector in r
Please, I want to delete 25% of the elements in the vector below and replace each deleted element by "maximum of the numbers + 550 * deleted element",
i.e
x=rnorm(100,1,4)
I want to delete 25% of elements in x
I want to replace each of the deleted element of x with
'maximum of x + 550*deleted elements'
Thanks
A:
I am not 100% sure exactly what you are asking, but hopefully this is what you were after.
n = 100 # size of x
x = rnorm(n,1,4)
Get positions of 25% of elements in x randomly
rep = sample(x = 1:n, size = 0.25*n, replace = FALSE)
Update the elements in x selected for replacement
x[rep] = max(x) + 550*x[rep]
|
[
"salesforce.stackexchange",
"0000019221.txt"
] | Q:
Billing address in Account Object
Is it possible to change the standard billing address State field to accept picklist values?
A:
Salesforce has a recently introduced State and Country Picklists feature that can be turned on in an org. Note that the state and country fields on both Account and Contact are affected, but in general that is a good thing so users see the same behaviour for fields of the same type.
|
[
"stackoverflow",
"0047719801.txt"
] | Q:
Two label align vertically in tableView cell auto resizing swift
I'm trying to auto-resize two labels aligned vertically within a cell; is that possible?
In the storyboard we have to set Auto Layout constraints, since I want the two labels (one on top of the other) to auto-resize.
I can't set each height, so the storyboard doesn't know the heights of these two labels and shows an Auto Layout error.
If I click the "Add Missing Constraints" button, it will add a height for "subtitle";
or I can set a height for "title" rather than "subtitle";
or I can make "title" equal to "subtitle", and it will still accept that.
here is the result:
A workaround will be separate these two to a different cell.
Any better solution? or am I just simply forget to do anything?
P.S. I'm using Xcode 8 with iOS 10.3.
A:
Try setting proper constraints for both labels:
Storyboard:
Constraints for the first label:
Constraints for the second label:
Controller class (use automatic dimensions for the table view):
class ViewController1: UIViewController {
var dataForTableView:[ProductImage]?
@IBOutlet weak var secondTable: UITableView!
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
secondTable.estimatedRowHeight = 96
secondTable.rowHeight = UITableViewAutomaticDimension
// CHECK FOR DATA
//print(dataForTableView?[0].url as Any)
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
}
extension ViewController1 : UITableViewDelegate,UITableViewDataSource{
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int{
return 1
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell{
let cell = tableView.dequeueReusableCell(withIdentifier: "cell1") as! testingCell2
cell.backgroundColor = UIColor.blue;
cell.secondLabel.text = "agfhjsgfhjsgdshgfhjsfjhvhssajs hjfbvhjfbvjhfgafgfhlgkaghkfakfhkflbvhfbvhfvbhfv ah fvfhbvfjhvbfhdavhfvhv"
return cell
}
// Number of sections in table
func numberOfSections(in tableView: UITableView) -> Int {
return 1
}// Default is 1 if not implemented
public func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat{
return UITableViewAutomaticDimension
}
}
Output:
I hope that is what you are looking for; let me know if there are any issues. Thanks.
|
[
"stackoverflow",
"0057087445.txt"
] | Q:
Return time difference as integer
I have a script that calculates the time difference between today and the first post. I need a condition where a certain thing happens if the result of a calculation based on that time difference is a whole number. When I echo the end result of the calculation it shows a whole number, but when I check whether it is an integer, nothing happens.
I've tracked the problem down to the time difference not being recognized as an integer, despite being a whole number.
$todays_date = current_time('d-m-Y');
//I will skip WP post loop to save space. It works.
$first_date = get_the_date( 'd-m-Y' );
$count = 3;
$date_diff = strtotime($todays_date) - strtotime($first_date);
$date_diff_val = abs(round($date_diff/86400));
if ($date_diff_val/$count > 1) {
$display_date2 = ($date_diff_val-1)/$count;
if (is_int($display_date2)) {
echo 'works';
}
}
I've tried substituting the number 10 for $date_diff_val and it worked. So clearly the issue is with $date_diff_val, but I cannot figure out what it is exactly.
A:
Your problem lies with
abs(round($date_diff/86400))
This will round and take the absolute value of something that may be a float. However, round() returns a float value, even if it was rounded to a whole number, and abs() returns a value of the same type it was given (so if it got a float, it returns a float). You can typecast the value to an integer by doing
$date_diff_val = (int)abs(round($date_diff / 86400));
PHP manual on round()
PHP manual on abs()
Live demo showing the resulting float / showing the typecasting
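To see the distinction concretely (a quick sketch):
$x = round(10.4);          // float(10), not int(10)
var_dump(is_int($x));      // bool(false)
var_dump(is_int((int)$x)); // bool(true)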
|
[
"superuser",
"0001024865.txt"
] | Q:
For what is the Thunderbolt AIC connector used?
What does "AIC" mean? What does the connector do? Is it like a USB pin header where you can connect for example your USB ports for your case? Or do you need something else?
Also there are expansion cards, but I don't know if you need the AIC connector together with a PCIe 4x slot.
Can also someone explain for what reason you want to connect the displayport input with a graphics card (all on the same machine)?
A:
Mainly, Thunderbolt consists of 4 PCI Express lanes (the PCIe version depends on the Thunderbolt version) and a DisplayPort connection. An Add-in card (AIC) gets these from the PCIe slot and the DisplayPort cable you plug into the card.
However, it is not enough to simply connect the cable to the PCI Express interfaces already present on your mainboard. Therefore, the card makes an additional connection, using the so-called GPIO (General-purpose I/O) header. Unfortunately, the exact nature of this connection is not documented. It is probably used by the chipset to dynamically reassign PCIe lanes as required.
The additional connector is required; the AIC will not work without it.
A:
AIC = Add-In Card
You can transmit DisplayPort signals through Thunderbolt; more tidbits of information are at Can I connect a DisplayPort monitor to the Thunderbolt port on a Mac, and vice-versa?. For later versions of TB, you can daisy-chain displays, which could result in less clutter.
Asus calls the AIC connector lead a "system-link cable", which leaves me none the wiser as to its purpose.
I do not know if you could operate a TB AIC without the DP input - I do not have the necessary parts to test that, and it could depend on if the driver insisted on having a DP input.
|
[
"stackoverflow",
"0043419256.txt"
] | Q:
My PHP/SQL scripts work on the local server but incompletely on the live site
I wrote scripts to function as a CMS for two sets of databases: an image gallery and a menu. They work on local servers that I tested on Windows, Mac, and even Linux using XAMPP and MAMP.
But on the live site on GoDaddy, the menu CMS can only modify database items and won't add a new one, and the gallery CMS can only upload images but won't add the database item like it's supposed to.
I appreciate any tips on how to debug this issue, or any possible solutions (I still have a lot to learn about PDO, PHP, and SQL).
The PDO file I require in (database credentials changed):
<?php
ini_set('display_errors', 'On');
define('APP_ROOT', __DIR__);
define('VIEW_ROOT', APP_ROOT . '/views');
define('BASE_URL', '');
$db = new PDO('mysql:host=127.0.0.1;dbname=menu','sqladmin', 'password');
require 'functions.php';
?>
Here is the image upload script:
<?php
if(isset($_FILES['files'])) {
foreach ($_FILES['files']['name'] as $file=> $name) {
$filename = date('Ymd-His',time()).mt_rand().'-'.$name;
try {
if(move_uploaded_file($_FILES['files']['tmp_name'][$file],'../uploads/'.$filename)){
$stmt = $db->prepare("INSERT INTO multiupload VALUES('',?)");
$stmt->bindParam(1, $filename);
$stmt->execute();
}
} catch(Exception $e) {
echo $e;
}
}
}
?>
and the add item to menu script:
<?php
if (!empty($_POST)) {
$name = $_POST['name'];
$label = $_POST['label'];
$dsc = $_POST['dsc'];
$price = $_POST['price'];
$price2 = $_POST['price2'];
$price3 = $_POST['price3'];
$price4 = $_POST['price4'];
$insertPage = $db->prepare("
INSERT INTO menu (name, label, dsc,price,price2,price3,price4)
VALUE (:name, :label, :dsc, :price,:price2,:price3,:price4)
");
$insertPage->execute([
'name' => $name,
'label' => $label,
'dsc' => $dsc,
'price' => $price,
'price2' => $price2,
'price3' => $price3,
'price4' => $price4,
]);
header('Location: ' . BASE_URL . '/admin/list.php');
}
?>
A:
How I would go about debugging this issue (a small sketch follows the list):
Make sure the insert query gets called.
Check what execute() returns for this query.
Dump the query and try executing it in the MySQL client. Of course, replace the placeholders with real values.
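For step 2 in particular, it helps to make PDO throw on failure instead of failing silently (a sketch; add this right after creating the connection):
$db = new PDO('mysql:host=127.0.0.1;dbname=menu', 'sqladmin', 'password');
// Surface SQL errors as exceptions, e.g. a strict-mode server rejecting '' for an integer column
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);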
|
[
"stackoverflow",
"0047146114.txt"
] | Q:
Deserializing newline-delimited JSON from a socket using Serde
I am trying to use Serde to send a JSON struct from a client to a server, where a newline from the client to the server marks the end of a message. My server looks like this:
#[derive(Serialize, Deserialize, Debug)]
struct Point3D {
x: u32,
y: u32,
z: u32,
}
fn handle_client(mut stream: TcpStream) -> Result<(), Error> {
println!("Incoming connection from: {}", stream.peer_addr()?);
let mut buffer = [0; 512];
loop {
let bytes_read = stream.read(&mut buffer)?;
if bytes_read == 0 {
return Ok(());
}
let buf_str: &str = str::from_utf8(&buffer).expect("Boom");
let input: Point3D = serde_json::from_str(&buf_str)?;
let result: String = (input.x.pow(2) + input.y.pow(2) + input.z.pow(2)).to_string();
stream.write(result.as_bytes())?;
}
}
fn main() {
let args: Vec<_> = env::args().collect();
if args.len() != 2 {
eprintln!("Please provide --client or --server as argument");
std::process::exit(1);
}
if args[1] == "--server" {
let listener = TcpListener::bind("0.0.0.0:8888").expect("Could not bind");
for stream in listener.incoming() {
match stream {
Err(e) => eprintln!("failed: {}", e),
Ok(stream) => {
thread::spawn(move || {
handle_client(stream).unwrap_or_else(|error| eprintln!("{:?}", error));
});
}
}
}
} else if args[1] == "--client" {
let mut stream = TcpStream::connect("127.0.0.1:8888").expect("Could not connect to server");
println!("Please provide a 3D point as three comma separated integers");
loop {
let mut input = String::new();
let mut buffer: Vec<u8> = Vec::new();
stdin()
.read_line(&mut input)
.expect("Failed to read from stdin");
let parts: Vec<&str> = input.trim_matches('\n').split(',').collect();
let point = Point3D {
x: parts[0].parse().unwrap(),
y: parts[1].parse().unwrap(),
z: parts[2].parse().unwrap(),
};
stream
.write(serde_json::to_string(&point).unwrap().as_bytes())
.expect("Failed to write to server");
let mut reader = BufReader::new(&stream);
reader
.read_until(b'\n', &mut buffer)
.expect("Could not read into buffer");
print!(
"{}",
str::from_utf8(&buffer).expect("Could not write buffer as string")
);
}
}
}
How do I know what length of buffer to allocate before reading in the string? If my buffer is too large, serde fails to deserialize it with an error saying that there are invalid characters. Is there a better way to do this?
A:
Place the TcpStream into a BufReader. This allows you to read until a specific byte (in this case a newline). You can then parse the read bytes with Serde:
use std::io::{BufRead, BufReader, Error, Write};
use std::net::TcpStream;

fn handle_client(stream: TcpStream) -> Result<(), Error> {
let mut data = Vec::new();
let mut stream = BufReader::new(stream);
loop {
data.clear();
let bytes_read = stream.read_until(b'\n', &mut data)?;
if bytes_read == 0 {
return Ok(());
}
let input: Point3D = serde_json::from_slice(&data)?;
let value = input.x.pow(2) + input.y.pow(2) + input.z.pow(2);
write!(stream.get_mut(), "{}", value)?;
}
}
I'm being a little fancy by reusing the allocation of data, which means it's very important to reset the buffer at the beginning of each loop. I also avoid allocating memory for the result and just print directly to the output stream.
|
[
"stackoverflow",
"0013924644.txt"
] | Q:
Android DatePickerDialog CalendarView's title Inconsistency
I am creating a DatePickerDialog, like it's done in the documentation. However, I am noticing that the CalendarView's title (i.e. "December 2012" as it would be for today) doesn't change immediately when the year is set in the Spinners. I see that the weeks are changed correctly, and I can set the title on the dialog based on the onSelectedDayChange callback with the appropriate date (month, month day, year, week day). Furthermore, if the month is changed in the Spinners, then the CalendarView is updated immediately. This includes, correctly showing the selected year if the year was changed before the month was changed. And if the CalendarView is scrolled to other months the year also gets adjusted to show the correct year.
This seems to imply that the CalendarView simply isn't redrawing the title (probably optimization?) when the date is getting set. Am I doing something else wrong? Is there a solution to this? Or is it a bug in the implementation?
Here's my code:
public class DatePickerFragment extends DialogFragment
implements DatePickerDialog.OnDateSetListener {
/** Name of the date stored in a {@link Bundle} */
public static final String KEY_DATE = "key.DatePickerFragment.DATE";
@Override
public Dialog onCreateDialog(Bundle icicle) {
final Bundle arguments = getArguments();
// Only read the array when arguments actually exist, to avoid an NPE.
Time date = arguments != null
? TimeMachine.getTimeFromArray(arguments.getIntArray(KEY_DATE))
: null;
if (date == null)
date = TimeMachine.getToday();
_dialog_window = new DatePickerDialog(getActivity(), this, date.year, date.month, date.monthDay);
final CalendarView calendar_view = _dialog_window.getDatePicker().getCalendarView();
calendar_view.setOnDateChangeListener(
new CalendarView.OnDateChangeListener() {
@Override
public void onSelectedDayChange(CalendarView _, int year, int month, int day) {
updateTitle(TimeMachine.getTimeFromArray(new int[]{ year, month, day }));
}
}
);
// Sets the title
updateTitle(date);
// Create a new instance of DatePickerDialog and return it
return _dialog_window;
}
@Override
public void onDateSet(DatePicker _, int year, int month, int day) {
final Time date = new Time();
date.set(day, month, year);
}
private void updateTitle(Time date) {
_dialog_window.setTitle(date.format(" %A, %B %e, %Y"));
}
/** The Dialog window */
private DatePickerDialog _dialog_window;
}
A:
Seeing as there aren't others coming forth with any other answer, I got around this problem by not enabling the CalendarView on the DatePicker, and instead making a custom DialogFragment that had a numerical date picker and a separate CalendarView (actually my own reimplementation based on the real CalendarView, to allow tweaks to its display and an Otto event bus). Then I can make sure that the CalendarView is set each time the numeric date picker is set.
|
[
"networkengineering.stackexchange",
"0000009333.txt"
] | Q:
cannot remove vlan names
Hi, I was practicing creating VLANs and assigning them to ports; however, I cannot remove the names.
CCNA-SWITCH1#show vlan
VLAN Name Status Ports
---- -------------------------------- --------- -------------------------------
1 default active Fa0/1, Fa0/2, Fa0/3, Fa0/4
Fa0/5, Fa0/6, Fa0/7, Fa0/8
Fa0/9, Fa0/10, Fa0/11, Fa0/12
Fa0/13, Fa0/14, Fa0/15, Fa0/16
Fa0/17, Fa0/18, Fa0/19, Fa0/20
Fa0/21, Fa0/22, Fa0/23, Fa0/24
Gi0/1, Gi0/2
1002 fddi-default act/unsup
1003 trcrf-default act/unsup
1004 fddinet-default act/unsup
1005 trbrf-default act/unsup
CCNA-SWITCH1#show ip interface brief
Interface IP-Address OK? Method Status Protocol
Vlan1 unassigned YES manual administratively down down
Vlan2 unassigned YES manual up down
Vlan3 unassigned YES manual up down
FastEthernet0/1 unassigned YES unset down down
FastEthernet0/2 unassigned YES unset down down
FastEthernet0/3 unassigned YES unset down down
FastEthernet0/4 unassigned YES unset down down
FastEthernet0/5 unassigned YES unset down down
FastEthernet0/6 unassigned YES unset down down
--More--
I tried to remove VLANs 2 and 3.
I used the command no vlan 2, etc., but they are still showing.
A:
It appears you created both the VLANs and the SVI (virtual interfaces).
While the no vlan 2 would (and did, according to your show vlan output) remove the VLAN, it won't remove the SVI (which shows up when you list the interfaces with show ip interface brief).
To remove the SVI, you need to issue the no interface vlan 2 command as well.
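For example, a hypothetical session on the switch above:
CCNA-SWITCH1#configure terminal
CCNA-SWITCH1(config)#no interface vlan 2
CCNA-SWITCH1(config)#no interface vlan 3
CCNA-SWITCH1(config)#end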
A:
Try no interface vlan 2 and no interface vlan 3 in global config mode.
|
[
"stackoverflow",
"0011927145.txt"
] | Q:
Sliding Side Nav Bar
I need help with making a vertical nav bar that slides to the right to reveal more content.
I'm aiming towards something similar to the blue bar here: http://www.teehanlax.com/labs/. The nav bar slides out (to the right) when the side navigation bar is clicked, and slides back (to the left) when the x button is clicked.
my code is:
<!--Am I implementing the jQuery right?-->
<!DOCTYPE HTML> <html> <head> <title>Nishad</title>
<link rel="stylesheet" href="style.css">
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js" type="text/javascript"> </script>
$(function() { $('#nav').click(function()
{ var leftMargin = ($(this).css('margin-left') === '0px') ? '-150px' : '0px'; $(this).animate({ 'margin-left' : leftMargin }, 500);
});
});
</head> <body> <div id="wrapper">
<div id="nav"></div>
<div id="content"></div>
</div> </body> </html>
A:
If you inspect the element, you can see that the website is making use of a negative value for the navigation's margin-left. When you click the +, they are setting margin-left to 0px.
You can get the click effect by attaching a click event handler. The sliding effect can be done using jQuery's animate(). Below is an example of what I just mentioned.
$(function() {
$('#nav').click(function() {
var leftMargin = ($(this).css('margin-left') === '0px') ? '-150px' : '0px';
$(this).animate({ 'margin-left' : leftMargin }, 500);
});
});
#wrapper {
white-space: nowrap;
}
#nav, #content {
height: 500px;
display: inline-block;
}
#nav {
width: 200px;
margin-left: -150px;
cursor: pointer;
background: lightgreen;
}
#content {
width: 500px;
background: lightblue;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="wrapper">
<div id="nav"></div><div id="content"></div>
</div>
jsFiddle Demo
|
[
"meta.stackexchange",
"0000148468.txt"
] | Q:
Why was my short answer deleted, but slightly longer answers were not?
The question is here: Is $(function() { }) an exact equivalent to $(document).ready(function() { })
And it boiled down to this:
"Now, the question is: Is $(function() { }) an exact equivalent to $(document).ready(function() { })?"
My deleted answer is here: https://stackoverflow.com/a/12605991/1689607
And it directly answered the question:
"No difference. Exactly the same..............."
Other answers were longer, but the answer to the question was ultimately the same. So why was my short (but entirely on-topic and correct) answer singled out by a moderator?
The reasons given by the link provided do not apply to my answer. https://stackoverflow.com/faq#deletion
A:
Your short answer was awful.
If you can't come up with enough content to meet the minimum length requirements, you should not post. Don't spam garbage characters into your answers to meet the length requirements.
It is never appropriate to post ".............." as part of an answer or anywhere else. There is literally no situation in the English language when this is appropriate. Your answers on Stack Overflow should read like works of reference, not forum postings.
If you need something to fill the space, why not link to a more authoritative source than your own statement? Your answer is more or less useless without something to back it up. As it stands, your answer was far better suited to a comment, since it seems to be you voicing your subjective opinion.
You'll notice that the top-voted and accepted answer contains more than four words and links to the documentation backing up the answer.
A:
Because. It wasn't that useful.........
A:
Two things regarding your issue:
A moderator deleted your answer due to flags on the post. Your 'answer' didn't add anything to the question that hadn't already been answered, and it wasn't even useful enough to post as a comment. You may not agree with the deletion, but understand that if you had provided a useful answer we would not be having this discussion. Other answers were likely not deleted because there weren't flags on them. Since most moderator actions take place in the context of the moderator queue (and not the question page), it's not a stretch to believe that the moderator didn't see the other, nearly as bad, answer on the question.
Arguing with users in comments is seldom the best way to find out why something happened (as was the case before I purged the comments). You may not agree with the action taken, but the top voted answer to this question shows you why the community feels like your answer should have been deleted.
Finally,
If you have an issue with any of the steps taken either in this question or the deletion of your Stack Overflow answer, I invite you to send an email to [email protected]. They handle all moderator complaints.
|
[
"latin.stackexchange",
"0000007634.txt"
] | Q:
Understanding "jam nunc"
The expression (idiom?) jam nunc appears several times in the Vulgata. So far I've seen two common translations. One is that of "now presently". For instance, Exodus 9:19:
(Latin) Mitte ergo jam nunc, et congrega jumenta tua, et omnia quae habes in agro ...
(English, Douay-Rheims) Send therefore now presently, and gather together thy cattle, and all that thou hast in the field ...
Another translation is "here and now". For instance, 1 Samuel 14:33:
(Latin) Nuntiaverunt autem Sauli dicentes quod populus peccasset Domino, comedens cum sanguine. Qui ait : Praevaricati estis : volvite ad me jam nunc saxum grande.
(English, Douay-Rheims) And they told Saul that the people had sinned against the Lord, eating with the blood. And he said: You have transgressed: roll here to me now a great stone.
Now, Wiktionary has some guidelines on the differences between jam and nunc. For instance, the latter website says
"Nunc" always means the literal present or "now"; the other use of "now" is usually translated "iam".
But how are the two together to be normally understood? Is there a rule for this? Is this phrase perhaps an idiom?
A:
Jam nunc is not at all mysterious. It simply, and literally, means 'already now'. An alternative might be to reverse the English words to 'now already', or to say 'even at this moment', or anything similar. It's only slightly different to jam fere, 'just about now'. In each of these phrases either added word serves as an emphasis, and in other contexts (where the English would allow it) might even be translated as 'very'.
Cicero (Att. 1. 8) uses the phrase with clear intent in Hermae tui Pentelici cum capitibus aeneis, de quibus ad me scripsisti, iam nunc me admodum delectant.
A:
Besides "already now", iam nunc (or equivalently, albeit less frequently, nunc iam) also means "now not anymore" in negative sentences, such as in Catullus' Miser Catulle:
Nunc iam illa non vult.
|
[
"stackoverflow",
"0044283210.txt"
] | Q:
Pass php Laravel's APP_URL value to AngularJS
I use AngularJS in my Laravel app.
In AngularJS, I use:
$location.host()
to get the domain of my app (e.g: www.example.com) and get access to my API, e.g:
var url = $location.protocol() + "://" + $location.host() + "/api/products/" ;
Now I changed servers and my main app location went from www.example.com to www.example.com/here. $location.host() gives me www.example.com, but not the /here part.
I am trying to find a way to use Laravel's APP_URL value (which contains my main app location) and pass it to AngularJS:
var app = angular.module("appProducts");
app.controller("CtrlUserSchedule", ['$scope', '$http', '$location', '$modal', function($scope, $http, $location, $modal) {
--> var laravelUrl = ** get php APP_URL value here **;
var apiUrl = laravelUrl + "/api/products/" ;
What's the cleanest way to pass APP_URL to AngularJS? Is there a way to set it globally so APP_URL is available in all my AngularJS apps?
A:
How about simply printing it out in the header?
<script>
window.app_url = "{{ env('APP_URL') }}";
</script>
</head>
Then it will simply be accessible from window.app_url or app_url after that.
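A sketch of how the controller from the question might then consume it (assuming the snippet above sits in the page's <head>):
var app = angular.module("appProducts");
app.controller("CtrlUserSchedule", ['$scope', '$http', '$window', function($scope, $http, $window) {
// window.app_url was emitted by the Blade template,
// e.g. "http://www.example.com/here"
var apiUrl = $window.app_url + "/api/products/";
$http.get(apiUrl).then(function(response) {
$scope.products = response.data;
});
}]);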
|
[
"stackoverflow",
"0020929667.txt"
] | Q:
Adding a JSON object using ng-submit + ng-model
How do I properly add (push) an object (nest) to a JSON structure using Angular?
See live working demo on: JSbin Link
I have a factory of Airports with a particular structure:
angApp.factory("Airports", function () {
var Airports = {};
Airports.detail = {
"PDX": {
"code": "PDX",
"name": "Portland International Airport",
"city": "Portland"
},
"STL": {
"code": "STL",
"name": "Lambert-St. Louis International Airport",
"city": "St. Louis"
},
"MCI": {
"code": "MCI",
"name": "Kansas City International Airport",
"city": "Kansas City"
}
};
return Airports;
});
Linked with a Controller:
How do I write a proper method to push the input to Airports.detail?
.controller("AirportsCtrl", function ($scope, Airports) {
$scope.formURL = "views/_form.html";
$scope.currentAirport = null;
$scope.airports = Airports;
$scope.setAirport = function (code) {
$scope.currentAirport = $scope.airports.detail[code];
};
$scope.addAirport = function() {
$scope.airports.push();
};
});
HTML:
What do I put into ng-model to push an object into Airports.detail properly?
Add Airport
ID:
<div class="form-group">
<label >code:</label><br>
<input class="form-control" type="text" placeholder="eg. PDX">
</div>
<div class="form-group">
<label>Name:</label><br>
<input class="form-control" type="text" ng-model="" placeholder="eg. Portland Intl. Airport">
</div>
<div class="form-group">
<label>City</label><br>
<input class="form-control"type="text" ng-model="" placeholder="eg. Portland">
</div>
<input class="btn btn-primary" type="submit">
</form>
A:
There are a few issues, but the biggest stopper is that the factory is defining an object, not an array. That's why the push won't work.
You're going to need some data to send from the form, so I bound models for your form elements in the HTML tags:
<form ng-submit="addAirport()" ng-model="ap" >
<h4 >Add Airport</h4>
<div class="form-group">
<label>ID:</label><br>
<input class="form-control" type="text" ng-model="ap.id" placeholder="eg. PDX">
</div>
Additional form elements were given models to match, ap.code, ap.name and ap.city. Binding the top-level ap object saves some code later on.
The addAirport function looks like this:
$scope.addAirport = function() {
$scope.airports.detail[$scope.ap.id] = $scope.ap;
delete($scope.ap);
};
That simply adds the $scope.ap (form) data to your $scope.airports.detail object (the detail object contains the collection). The delete command resets the form.
Here's an updated jsbin, adding airports now works: http://jsbin.com/OGipAVUF/11/edit
|
[
"superuser",
"0001342574.txt"
] | Q:
Missing Email Toolbar
I have a few older Word documents with the following toolbar showing. If I save them in Office 2016 the toolbar is hidden and I do not know how to restore it. Since I did not create the original documents I am not sure what version (probably 2010) and what options were used to create this.
A:
There are no toolbars in Word 2016, and there haven't been since Word 2003.
To add the ability to send emails from Word 2016 you can add the Send function to your QAT (Quick Access Toolbar), see image below, or add it to either an existing or custom tab on Word's ribbon. Also look at the other "Send ..." email functions that are available in the All Commands category of QAT or Ribbon customization.
|
[
"stackoverflow",
"0057502233.txt"
] | Q:
How to set parameters for SaveAs() dialog in Word.Application?
I want to save a Word document as PDF from a PowerShell script.
The following code works for me.
$Word = New-Object -ComObject Word.Application
$Doc = $Word.Documents.Open("C:\TEMP\WORD.DOCX")
$Name = ($Doc.Fullname).Replace("DOCX", "PDF")
$result = $Doc.SaveAs([ref] $Name, [ref] 17)
$Doc.Close()
echo "Saved to $Name"
The produced PDF is a PDF/A though.
When I save the document manually then I can set the option "PDF/A compliant" in a dialog which pops up.
How can I change this format-specific option via PowerShell?
The pictures explain perhaps better what I'm trying.
A:
The only way I know of is by using the ExportAsFixedFormat function instead of SaveAs.
$Word = New-Object -ComObject Word.Application
$Doc = $Word.Documents.Open("C:\TEMP\WORD.DOCX")
$Name = [System.IO.Path]::ChangeExtension($Doc.Fullname, "PDF")
# Use ExportAsFixedFormat function.
# See: https://docs.microsoft.com/en-us/office/vba/api/word.document.exportasfixedformat
# Parameters:
# OutputFileName, ExportFormat, OpenAfterExport, OptimizeFor, Range, From
# To, Item, IncludeDocProps, KeepIRM, CreateBookmarks, DocStructureTags
# BitmapMissingFonts, UseISO19005_1
# The last parameter 'UseISO19005_1' saves as PDF/A Compliant
$result = $Doc.ExportAsFixedFormat(
$Name,
[Microsoft.Office.Interop.Word.WdExportFormat]::wdExportFormatPDF,
$false,
[Microsoft.Office.Interop.Word.WdExportOptimizeFor]::wdExportOptimizeForOnScreen,
[Microsoft.Office.Interop.Word.WdExportRange]::wdExportAllDocument,
0,
0,
[Microsoft.Office.Interop.Word.WdExportItem]::wdExportDocumentContent,
$true,
$true,
[Microsoft.Office.Interop.Word.WdExportCreateBookmarks]::wdExportCreateWordBookmarks,
$true,
$false,
$true
)
$Doc.Close()
# clean up Com object after use
$Word.Quit()
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($Word) | Out-Null
[System.GC]::Collect()
[System.GC]::WaitForPendingFinalizers()
|
[
"stackoverflow",
"0043480362.txt"
] | Q:
dplyr for rowwise quantiles
I have a df of strata, each of which has 1000 samples from a posterior distribution of the estimates from that stratum.
mydf <- as.data.frame(lapply(seq(1, 1000), rnorm, n=100))
colnames(mydf) <- paste('s', seq(1, ncol(mydf)), sep='')
I want to add columns for a few quantiles of the distribution for each row. In classic R, I'd write this.
quants <- t(apply(mydf, 1, quantile, probs=c(.025, .5, .975)))
colnames(quants) <- c('s_lo', 's_med', 's_hi')
mydf <- cbind(mydf, quants)
I suspect there's a direct way to do this in dplyr (maybe rowwise?) but my attempts have failed. Ideas?
A:
dplyr is not optimized for row-based calculations like that. Though you can do this with rowwise(), I recommend against it: performance will be abysmal. Your best speed will likely be with something that expects a matrix, and can operate on the rows. I suggest apply.
Instead of dealing with a 100x1000 data.frame, for brevity I'll go with 5 columns:
set.seed(2)
mydf <- as.data.frame(lapply(seq(1, 5), rnorm, n=10))
colnames(mydf) <- paste('s', seq(1, ncol(mydf)), sep='')
Converting to a matrix is only reasonable if all columns are of the same class. In this case, they are all numeric so we are safe. (If you have non-numeric columns in the dataframe, extract only the ones you need here and bind them back in later.)
mymtx <- as.matrix(mydf)
apply(mymtx, 1, quantile, c(0.1, 0.9))
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
# 10% 1.028912 1.430939 1.999521 0.305907 1.753824 0.03267599 1.934381 1.270504 2.995816 1.489634
# 90% 4.950067 3.807735 4.881554 6.123989 4.886388 5.55628806 4.207605 4.184460 4.406384 3.782134
One notable caveat with using apply like this is that the result is in row-based form, perhaps transposed from what one would expect. Simply wrap it in t(...) and you'll see the columns you might expect.
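For instance, continuing the example above:
quants <- t(apply(mymtx, 1, quantile, c(0.1, 0.9)))
head(quants) # now one row per observation, with columns "10%" and "90%"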
This can be recombined with the original dataframe using cbind or similar function.
This can be done in a pipeline like so:
mydf %>%
bind_cols(as.data.frame(t(apply(., 1, quantile, c(0.1, 0.9)))))
# s1 s2 s3 s4 s5 10% 90%
# 1 0.1030855 2.4176508 5.0908192 4.738939 4.616414 1.02891157 4.950067
# 2 1.1848492 2.9817528 1.8000742 4.318960 3.040897 1.43093918 3.807735
# 3 2.5878453 1.6073046 4.5896382 5.076164 4.158295 1.99952092 4.881554
# 4 -0.1303757 0.9603310 4.9546516 3.715842 6.903547 0.30590700 6.123989
# 5 0.9197482 3.7822290 3.0049378 3.223325 5.622494 1.75382406 4.886388
# 6 1.1324203 -0.3110691 0.5482936 3.404340 6.990920 0.03267599 5.556288
# 7 1.7079547 2.8786046 3.4772373 2.274020 4.694516 1.93438093 4.207605
# 8 0.7603020 2.0358067 2.4034418 3.097416 4.909156 1.27050387 4.184460
# 9 2.9844739 3.0128287 3.7922033 3.440938 4.815839 2.99581584 4.406384
# 10 0.8612130 2.4322652 3.2896367 3.753487 3.801232 1.48963385 3.782134
I'll leave the column naming up to you.
A:
With data.frame-like structures, it's going to be very hard to do rowwise operations efficiently, due to the nature of the data structure. A more efficient solution is probably to reshape the data, do the calculation blockwise in the column, and then join the result back. With dplyr + tidyr, something like this:
library(dplyr)
library(tidyr)
mydf <- as_data_frame(mydf) %>%
mutate(id = row_number())
quants <- mydf %>%
gather(sample, value, -id) %>%
group_by(id) %>%
summarize(q025 = quantile(value, 0.025),
q500 = quantile(value, 0.5),
q975 = quantile(value, 0.975)) %>%
ungroup()
result <- left_join(quants, mydf)
Or, if speed is particularly important, with data.table...
library(data.table)
setDT(mydf)
mydf[, id := .I]
mydf_melt <- melt(mydf, id.vars = 'id')
quants <- mydf_melt[, as.list(quantile(value, c(0.025, 0.5, 0.975))), by = id]
setkey(quants, 'id')
setkey(mydf, 'id')
result <- quants[mydf]
A:
purrr::pmap can be useful for such cases, iterating in parallel through items in a list, which with a data.frame is operating rowwise. It's more useful if each item contains a parameter or if the function accepts dots, though; otherwise you have to collect a vector with c.
library(tidyverse)
set.seed(47)
mydf <- as.data.frame(lapply(seq(1000), rnorm, n = 100))
names(mydf) <- paste0('s', seq_along(mydf))
# make vector of each row; pass to quantile; convert to list; simplify to data.frame
mydf %>% pmap_df(~as.list(quantile(c(...), c(.025, .5, .975)))) %>%
bind_cols(mydf) # self join to original columns
#> # A tibble: 100 × 1,003
#> `2.5%` `50%` `97.5%` s1 s2 s3 s4
#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 24.52876 501.2313 974.1547 2.99469634 1.857485 4.8062449 5.412425
#> 2 25.96306 501.5381 975.4427 1.71114251 1.534527 5.0045983 4.029735
#> 3 25.36792 499.8048 974.9472 1.18540528 1.575371 2.1515656 4.537178
#> 4 27.15081 500.9932 975.3688 0.71823499 2.747321 0.9841692 3.774623
#> 5 25.77212 498.7223 974.5576 1.10877555 2.659429 4.6865536 5.448446
#> 6 25.43256 501.2437 973.7319 -0.08573747 2.198829 3.7851258 5.769600
#> 7 24.29993 500.8599 975.5050 0.01451784 1.938954 4.1822894 5.205473
#> 8 25.16637 501.8597 974.8636 1.01513086 3.492032 3.2551467 2.570020
#> 9 25.36332 500.3975 973.3588 0.74795410 3.660735 3.3051286 4.270915
#> 10 27.02456 499.8759 974.3890 -0.46575030 2.771156 3.4292355 3.372155
#> # ... with 90 more rows, and 996 more variables: s5 <dbl>, s6 <dbl>,
#> # s7 <dbl>, s8 <dbl>, s9 <dbl>, s10 <dbl>, s11 <dbl>, s12 <dbl>,
#> # s13 <dbl>, s14 <dbl>, ...
The names generated by quantile are not syntactic, but could easily be replaced by inserting set_names(c('s_lo', 's_med', 's_hi')) before bind_cols. There are many other ways to reassemble the results, as well, if you like.
|
[
"scicomp.stackexchange",
"0000021352.txt"
] | Q:
Does OOP work easily and efficiently in parallel computations?
I am working on a task for which the object-oriented (OO) approach fits well. I am doing it in MATLAB, since I am in the prototyping phase. However, I know that later I will surely have to perform larger-scale computations (my university has a cluster). One method is to use mex functions for the computationally heavy parts, but that does not work well with MATLAB classes. The other option is to write the code in C++, whose OOP is different from that of MATLAB, but there are many similarities and the idea is the same. How easy is it then to use OpenMP, MPI, PETSc, etc. with C++ classes to parallelize my code? The third option would be to neglect OOP, but then I sacrifice the elegance and extensibility of my program. My questions:
1) Do you recommend me to remain with OOP, or switch to the procedural way?
2) Which parallelization technique do you recommend (OpenMP, MPI, PETSc, etc.)? I do not want to invest an enormous amount of time in it.
I am quite skilled in MATLAB, but have only a basic knowledge in C and nothing in C++.
EDIT
From one of the comments it turned out that there is no difference if I use standard variables or objects. So to reformulate question 1)
1) Is it certain that OOP will not make my life harder when I do the parallelization? I will create a specific application, not a general tool; in this case, how difficult is OO C++ to learn? I won't need special data structures, just loops, if statements, and calls to the parallelization libraries. Is it a viable solution just to make the class methods parallel (so that the implementation remains hidden from outside), or is a complete rewrite required?
A:
Empirically, your comment that "most numerical libraries 'do not like' OOP, that is why many software are written in C or Fortran" is not correct. Instead, I would say that almost all software libraries that have been written over the last 20 years are in one way or another object oriented. For example, PETSc is object oriented (even though it is written in C, it basically uses classes, inheritance, and virtual functions under the hood), and so is the other large linear algebra library, Trilinos (written in C++). In the finite element context, libMesh, FEniCS/Dolphin, and deal.II (my own project) are all written in C++ in an object oriented way. In fact, even MPI is written in an object oriented way, where communicators are like classes and most functions operate on them (in C++ one would write them as member functions).
So in reality, regardless of language, the object oriented programming paradigm has long won the battle in scientific computing. This is true whether you write your code in C++ or in C/Fortran -- basically all codes of significant size use the OOP paradigm.
A:
Without knowing more about your problem, let me just say some general thoughts.
You can definitely parallelize OOP code using either openMP or MPI. In fact, basically all of the professional/commercial software solving computational science problems takes advantage of the OOP paradigm, because OOP provides significant advantages in encapsulation, extensibility, maintainability, modularization, inheritance, etc. I should stress that openMP and MPI are fundamentally different parallelization paradigms:
openMP
Shared memory paradigm
Useful for speeding up code on a given processor. For example, instead of using just 1 of the 4 cores on a quad-core processor, you can use all 4 cores.
All your threads see everything - i.e. all cores can read and write to the same arrays, vectors, etc.
MPI
Distributed memory paradigm
Useful for breaking up your problem and distributing it across many machines in a cluster.
Each separate node acts like a separate computer. Your problem must be broken up and given to each node/computer in the cluster. That node then works on that data only.
If you need to access data on a different node then you have to do that through communication calls between the nodes.
Communication is relatively expensive and should be minimized. This coupled with the fact that the problem must be broken up means that quite often sequential algorithms must be completely re-thought to work in a distributed environment.
Hybrid
Here we use both MPI and openMP. For example you might break up your problem and distribute it across many nodes on a cluster. Each node might have 4, 6, 8 ,16 or more cores. On each node you then use openMP to gain further speed-up.
I would say that in general parallelization is not a trivial endeavor - especially with MPI. Many times you will need to completely re-think your algorithms so that they work efficiently on a distributed environment. Because of this I think it would be very challenging to learn both C++ and MPI in a short amount of time. C++ alone can take years to truly master. MPI on the other hand isn't difficult from a syntactic perspective - you may only need to know 10-15 subroutine calls - but can be difficult in terms of adjusting your algorithms to handle a distributed memory model.
For a beginner openMP is probably the easiest and is less likely to require fundamental algorithm changes. You have a slow for loop? You might be able to speed this section of the code up with a simple #pragma statement.
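To make that concrete, here is a minimal, self-contained C++/OpenMP sketch; the loop body is just a stand-in for whatever per-element work your code does:
#include <cmath>
#include <iostream>
#include <vector>
int main() {
const long n = 1000000;
std::vector<double> x(n, 1.0), y(n);
// One pragma distributes the iterations across all available cores.
// Compile with OpenMP enabled, e.g.: g++ -fopenmp example.cpp
#pragma omp parallel for
for (long i = 0; i < n; ++i) {
y[i] = std::sin(x[i]) + std::sqrt(x[i]);
}
std::cout << y[0] << "\n";
return 0;
}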
Conclusion
I would definitely recommend using OOP if your code is large and you want it to be able to grow and be maintained over time. Heck, even if your code is small, OOP is great for creating well designed, modularized code. Learning a language like C++ - especially since you know C already - won't be too difficult (but will take time to become an expert) and it will give you dividends over the long haul. If you are serious about parallelizing your code yourself then you will want to learn both openMP and MPI eventually. In the meantime openMP is probably the easiest: if you are using gcc it doesn't require installing anything, since it is built into the compiler and enabled with the -fopenmp flag, and the parallelism itself is driven by #pragma directives.
|
[
"stackoverflow",
"0034132505.txt"
] | Q:
Frame for "Banner View" will be different at run time. iAd works for largest screen but blank on smaller ones
I've nearly finished my app and the only issue I have is that my iAd banner view will only show on the largest screen.
I have set the constraints so the iAd will fit nicely on smaller screens but when I run it on any size other than the largest, all I see is a white box.
The current code I have is:
-(void)bannerViewDidLoadAd:(ADBannerView *)banner {
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:1];
[banner setAlpha:1];
[UIView commitAnimations];
}
-(void)bannerView:(ADBannerView *)banner didFailToReceiveAdWithError:(NSError *)error {
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:1];
[banner setAlpha:0];
[UIView commitAnimations];
}
I'm wondering if I should add a line of code to the bannerViewDidLoadAd to say something about the frame size.
The error I keep getting is:
Frame for "banner view" will be different at run time.
Size will be (320,50) at runtime but is (480,60) in the canvas.
The iAd framework has been imported and the ad delegate has been set.
It's within a scroll view if that makes any difference.
All my constraints are fine; it's just this frame size that's the issue. If I attempt to update frames, nothing happens.
Any help would be greatly appreciated.
Thanks.
A:
Sounds like you have trailing and leading constraints set on your ADBannerView.
Your ADBannerView will know which device it is on and set the dimensions of the ADBannerView accordingly. You should just let Auto Layout know where you want your ad to be. For example, if you wanted your ADBannerView to be at the bottom of your view then you would pin it to the bottom of your view with Bottom Space to: Bottom Layout Guide and align it to Align Center X to: Superview.
Also, don't use the beginAnimations:context: method. From UIView Class Reference:
Use of this method is discouraged in iOS 4.0 and later. You should use
the block-based animation methods to specify your animations instead.
Using block-based animations, your delegate methods would end up looking like this:
-(void)bannerViewDidLoadAd:(ADBannerView *)banner {
[UIView animateWithDuration:1.0 animations:^{
banner.alpha = 1.0;
}];
}
-(void)bannerView:(ADBannerView *)banner didFailToReceiveAdWithError:(NSError *)error {
[UIView animateWithDuration:1.0 animations:^{
banner.alpha = 0.0;
}];
}
|
[
"stackoverflow",
"0021225198.txt"
] | Q:
(Tkinter) Command executed automatically when binding, binding not acting as expected
I am trying to bind a key to a command conditionally. But when I bind, the command gets automatically executed (when I toggle both commands). Why does that happen? And how can I only bind a command without executing it? Also after binding both commands, only the first one (function p) is being executed. Why is that? My code is as follows:
from tkinter import *
top = Tk()
editor = Text(top)
editor.pack()
def br(event=None):
try:
editor.insert(INSERT, "<br />")
except:
pass
def ins_br():
br()
return 'break'
def p(event=None):
try:
editor.insert(INSERT, "\n<p></p>")
return 'break'
except:
pass
def br_and_p(event=None):
br()
p()
def enter_key():
if ins_br_var.get() == 1 and p_var.get() == 1:
editor.bind('<Return>', br_and_p())
elif ins_br_var.get() == 1 and p_var.get() == 0:
editor.bind('<Return>', br)
elif ins_br_var.get() == 0 and p_var.get() == 1:
editor.bind('<Return>', p)
else:
editor.unbind('<Return>')
toolbar = Frame(top, bd=1, relief=RAISED)
toolbar.pack(side=TOP, fill=X)
ins_br_var = IntVar()
ins_br_cbox = Checkbutton(toolbar, text="ins_br", variable=ins_br_var, command=enter_key)
ins_br_cbox.pack(side=LEFT, padx=2, pady=2)
p_var = IntVar()
p_cbox = Checkbutton(toolbar, text="p", variable=p_var, command=enter_key)
p_cbox.pack(side=LEFT, padx=2, pady=2)
A:
Omit the ().
When you do this:
editor.bind('<Return>', br_and_p())
... You are immediately executing the function br_and_p and binding its return value (None) to the event.
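The fix is to pass the function object itself, so Tkinter can call it when the event actually fires:
editor.bind('<Return>', br_and_p)  # no parentheses: bind the function, don't call it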
|
[
"math.stackexchange",
"0000504339.txt"
] | Q:
Equivalence of integral and differential forms of transport equation
I would like to prove the equivalence of the differential and integral forms of Reynold's transport equation. This problem is stated in terms of fluid mechanics.
Problem Statement
Let $V$ be a closed volume in $\mathbb{R}^3$, $A$ be the surface of $V$, and $\mathbf{n}$ the normal to A ($\mathbf{n}$ is thus a function of position). Let $F(\mathbf{x}, t)$ be a scalar function of three-dimensional position and time. Also let $\mathbf{u} \in \mathbb{R}^3$ be the
field velocity in $\mathbb{R}^3$.
By letting $V \to 0$, derive this equality:
$$
\frac{DF}{Dt}= \frac{\partial F}{\partial t} + \frac{\partial F}{\partial x_i} u_i
$$
from this one:
$$
\frac{D}{Dt} \int_V F \ dV= \int_V \frac{\partial F}{\partial t}\ dV + \int_A (\mathbf{u} \cdot \mathbf{n})\ F \ dA,
$$
where $D/Dt$ denotes a full derivative and double indices denote summation.
Attempt at a Solution
We first convert the integral over $A$ to one over $V$ by way of the divergence theorem:
$$
\int_A (\mathbf{u} \cdot \mathbf{n})\ F \ dA = \int_A (F \mathbf{u}) \cdot \mathbf{n} \ dA
= \int_V \nabla \cdot (F\mathbf{u}) \ dV.
$$
Expand the divergence:
$$
\int_V \nabla \cdot (F\mathbf{u}) \ dV = \int_V \nabla F \cdot \mathbf{u} \ dV + \int_V (\nabla \cdot \mathbf{u})F \ dV.
$$
Call this result (1).
Although I do not understand why it is true, the text frequently uses equalities of the
form
$$
\lim_{V \to 0} \ \int_V F(\mathbf{x}, t) \ dV = F(\mathbf{x}, t)
$$
and so I intend to do so as well here (aside: I guess it's implicit that $V$ on the LHS shrinks to the point $\mathbf{x}$ on the RHS?).
Substituting result (1) into the integral statement and (admittedly blindly) applying the preceding rule almost gives me what I want:
$$
\frac{DF}{Dt} = \frac{\partial F}{\partial t} + \frac{\partial F}{\partial x_i} u_i +
\lim_{V \to 0} \int_V (\nabla \cdot \mathbf{u})F \ dV.
$$
The remaining limit is the result of the expansion of the gradient in result (1).
Reiteration of the Question
Is the limit in last equation equal to 0, and if so, why, or have I made a mistake?
A:
The statement
$$
\lim_{V \to 0} \int_V F(\mathbf{x}) \,dV = F(\mathbf{x})
$$
is clearly false; I should have caught this error. The correct statement (statement (1)) is
$$
\lim_{V \to 0}\frac{1}{V} \int_V F(\mathbf{x}) \,dV = F(\mathbf{x}).
$$
Follow the original proof in the question statement until
$$
\frac{d}{dt} \int_{V(t)} F \,dV = \int_{V(t)} \frac{\partial F}{\partial t}dV +
\int_{V(t)} \nabla F \cdot \mathbf{u} \,dV + \int_{V(t)} F \nabla \cdot
\mathbf{u} \,dV,
$$
where we have used the divergence theorem and expanded the terms. We will be taking the limit as $V \to 0$; to keep the algebra under control we assume already that we can discard second-order and higher terms in the Taylor expansion of $F$ (formally we would need to write the expansion and discard the terms once we take the limit). In other words, we take $F$ to be constant on $V$. Then we can take $F$ out of the integral on the LHS, and so the equality
$$
\frac{d}{dt} \int_{V(t)} F \,dV = \frac{d}{dt} F \int_{V(t)} \,dV = V\frac{dF}{dt} + F \frac{dV}{dt}
$$
will hold in the limit. Putting this in the statement above, dividing by V, and taking the limit $V \to 0$, we have
$$
\frac{dF}{dt} + F \lim_{V \to 0} \frac{1}{V}\frac{dV}{dt} = \frac{\partial F}{\partial t} + u_i \frac{\partial F}{\partial x_i} + F\frac{\partial u_i}{\partial x_i}
$$
where we have applied statement (1) to the integrals. Lastly, it is true that
$$
\lim_{V \to 0} \frac{1}{V}\frac{dV}{dt} = \frac{\partial u_i}{\partial x_i},
$$
which we do not prove here (straightforward but a little tedious). Substituting this into the equality above and identifying $\frac{dF}{dt} = \frac{DF}{Dt}$ (a statement that makes me a little uncomfortable) gives the desired equality.
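(For a quick sketch of why that identity holds: the volume of a material region changes at the rate of the boundary flux, so
$$
\frac{dV}{dt} = \int_A \mathbf{u} \cdot \mathbf{n} \, dA = \int_V \nabla \cdot \mathbf{u} \, dV
$$
by the divergence theorem; divide by $V$, let $V \to 0$, and apply statement (1).)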
Summary
There were two errors in the original proof. The identity regarding the limit $V\to 0$ of the integrals was incorrect. Secondly, the time derivative cannot be taken through the integral since $V$ depends on time (the entire reason we have a transport equation at all). Correcting this error gives a second term on the LHS which ultimately cancels the unwanted term on the RHS, by way of another identity introduced here.
|
[
"stackoverflow",
"0019823332.txt"
] | Q:
DetailsView Update ItemTemplate on Select
I have a GridView that has records in it. When you select one of the records, it updates the DetailsView so you can make changes. I have them set up as templates. One of the fields that I am pulling from SQL is a number 1-4 (a priority level). I want it so that when you select the record in the GridView, it shows the data in the DetailsView. Right now, it is showing the number, but I wrote another method (a switch statement) to convert the number into text showing what the number means. This page is written as an ASCX file, just FYI. I need to know what code I need on the back end so that my method can swap the number in Label6 for the text.
I omitted some code to save time.
<asp:DetailsView ID="dvRequestDetails" runat="server" AutoGenerateRows="False" CellPadding="4"
DataKeyNames="casenumber" DataSourceID="RequestDetails" Font-Names="Arial" Font-Size="Small"
ForeColor="#333333" GridLines="None" Height="50px" Width="300px"
EnableModelValidation="True"
onpageindexchanging="dvRequestDetails_PageIndexChanging">
<FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
<CommandRowStyle BackColor="#E2DED6" Font-Bold="True" />
<EditRowStyle BackColor="WhiteSmoke" />
<RowStyle BackColor="#F7F6F3" ForeColor="#333333" VerticalAlign="Top" />
<PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" />
<Fields>
<asp:TemplateField HeaderText="Priority" SortExpression="other">
<EditItemTemplate>
<asp:DropDownList ID="ddlDetailsOther" runat="server" AutoPostBack="True"
SelectedValue='<%# Bind("other") %>' OnSelectedIndexChanged="DropDownList1_SelectedIndexChanged">
<asp:ListItem Value="4" Selected="True">General</asp:ListItem>
<asp:ListItem Value="3">Low Priority</asp:ListItem>
<asp:ListItem Value="2">High Priority</asp:ListItem>
<asp:ListItem Value="1">Critical</asp:ListItem>
</asp:DropDownList>
</EditItemTemplate>
<InsertItemTemplate>
<asp:TextBox ID="TextBox6" runat="server" Text='<%# Bind("other") %>'></asp:TextBox>
</InsertItemTemplate>
<ItemTemplate>
<asp:Label ID="Label6" runat="server" Text='<%# Bind("other") %>' OnDataBound="dvRequestDetails_DataBound"></asp:Label>
</ItemTemplate>
</asp:TemplateField>
</Fields>
<FieldHeaderStyle BackColor="#E9ECF1" Font-Bold="True" />
<HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" VerticalAlign="Top" />
<AlternatingRowStyle BackColor="White" ForeColor="#284775" />
</asp:DetailsView>
A:
You can call methods in your code behind from the aspx page easy enough.
Code Behind with method to convert priority to a string (I have assumed it is an int)
public partial class DemoPage : System.Web.UI.Page
{
protected string GetPriorityName(object priority)
{
switch ((int)priority)
{
case 1:
return "Critical";
case 2:
return "High Priority";
case 3:
return "Low Priority";
case 4:
return "General";
default:
return "Unknown";
}
}
}
Then in the aspx just call this method passing in your value
<asp:DetailsView ID="dvRequestDetails" runat="server">
<Fields>
<asp:TemplateField HeaderText="Priority" SortExpression="other">
<EditItemTemplate>
<asp:DropDownList ID="ddlDetailsOther" runat="server" AutoPostBack="True"
SelectedValue='<%# Bind("other") %>' OnSelectedIndexChanged="DropDownList1_SelectedIndexChanged">
<asp:ListItem Value="4" Selected="True">General</asp:ListItem>
<asp:ListItem Value="3">Low Priority</asp:ListItem>
<asp:ListItem Value="2">High Priority</asp:ListItem>
<asp:ListItem Value="1">Critical</asp:ListItem>
</asp:DropDownList>
</EditItemTemplate>
<InsertItemTemplate>
<asp:TextBox ID="TextBox6" runat="server" Text='<%# GetPriorityName(Bind("other")) %>'></asp:TextBox>
</InsertItemTemplate>
<ItemTemplate>
<asp:Label ID="Label6" runat="server" Text='<%# GetPriorityName(Bind("other")) %>' OnDataBound="dvRequestDetails_DataBound"></asp:Label>
</ItemTemplate>
</asp:TemplateField>
</Fields>
</asp:DetailsView>
Seeing as you now have the list of priorities twice (once in the aspx and once in the code behind), you would probably be better off making it a single list, binding the drop-down list to it, and reading the names from it as well - in case you decide to change a name or add another priority, it then only needs to be done in one spot. A sketch of that idea follows.
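A sketch of that single-source idea in the code behind (the dictionary replaces the switch; binding the DropDownList to the same dictionary with DataValueField="Key" and DataTextField="Value" would complete it):
using System;
using System.Collections.Generic;
public partial class DemoPage : System.Web.UI.Page
{
// Single source of truth for the priority names.
private static readonly Dictionary<int, string> Priorities =
new Dictionary<int, string>
{
{ 1, "Critical" },
{ 2, "High Priority" },
{ 3, "Low Priority" },
{ 4, "General" }
};
protected string GetPriorityName(object priority)
{
string name;
return Priorities.TryGetValue(Convert.ToInt32(priority), out name)
? name
: "Unknown";
}
}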
|
[
"stackoverflow",
"0016962252.txt"
] | Q:
Use variables from one program in another
I am creating two 2D arrays in one file, say readbundle.cpp. They are quite huge in dimension, as they actually 3D points created from an image, after some heavy mathematics.
Now, how can I use the values from here, in another file, resectioning.cpp? I know this uses the concept of oops and classes. But if you can just temme the probable syntax and how to use them, it would be really helpful. Please help me here. Thanks in advance. I have searched in google, but since I am new to this,I am not sure what am I looking at or where to look. My apologies if you feel this is very rudimentary.
The code is in c++.
They are 2 different files totally. I haven't made them a same program or created any class. This is what I want to know. How to connect them, and how to use the values from one in another.
Suppose X[i][j] is just normally created and defined in readbundle.cpp and after compiling and executing it, I have certain values in X[i][j].
Now I want to use this X[i][j] in the program, resectioning.cpp, which I am compiling separately. I haven't defined any class, or any oops. Please help me how can I achieve this. These two programs aren't connected as of now. They are just normal programs. I know I have to do something public and call the variable somehow. But I just dunno the right way of doing it. Please correct me if am wrong as well.
Edit
suppose the readbundle.cpp is as follows
#include ...
.
.
vector<vector<int> > X3D; // (declared global)
.
.
and the resect.cpp is as follows
#include ....
.
.
.
extern vector<vector<int> > X3D; // will the values be reflected here?
// the values changed in readbundle.cpp
int main()
{
cout<<X3D[0][0]<<endl;
}
From the answers, I hope I have understood the concept of extern correctly. Please correct me if I am wrong.
NOTE
Will I get the same values of X3D as in the previous file? I am getting an error in this case. How can I achieve the functionality I am looking for?
A:
You can do this like this:
In your arrayfile you declare the array however it looks like, i.E.
char MyArray[] = { 123, 20, -4, ...};
unsigned int MyArraySize = sizeof(MyArray);
In your sourcefile you can reference it like this:
extern char MyArray[];
extern unsigned int MyArraySize;
If you know the size in advance you can do
char MyArray[10] = ...;
and
extern char MyArray[10];
Update
Here is a sample code to get you started.
Create the files as indicated. If you change the names don't forget to change the include directive as well. ;)
ArrayFile.h:
#ifndef ARRAYFILE_H_
#define ARRAYFILE_H_
#define ARRAY_ROWS 5
#define ARRAY_COLUMNS 3
#endif /* ARRAYFILE_H_ */
ArrayFile.cpp:
#include "ArrayFile.h"
char BundleArray[ARRAY_ROWS][ARRAY_COLUMNS] =
{
{ 1, 2, 3 },
{ 2, 4, 5 },
{ 3, 6, 7 },
{ 4, 8, 9 },
{ 5, 0, 6 },
};
ArraySample.cpp
#include <iostream>
#include "ArrayFile.h"
extern char BundleArray[ARRAY_ROWS][ARRAY_COLUMNS];
int main()
{
unsigned int rows = ARRAY_ROWS;
unsigned int columns = ARRAY_COLUMNS;
for(unsigned int x = 0; x < rows; x++)
{
std::cout << "Row:" << x << std::endl;
for(unsigned int y = 0; y < columns; y++)
{
std::cout << "Value [" << x << "/" << y << "] = " << (char)((BundleArray[x][y])+0x30) << std::endl;
}
}
return 0;
}
Once you have created these files, compile them together (for example, g++ ArrayFile.cpp ArraySample.cpp).
|
[
"stackoverflow",
"0022797966.txt"
] | Q:
Unclassifiable statement in Fortran 95 when declaring function
I would appreciate some help on this. The point of the program is to take a lower bound, an upper bound, and the number of steps, and then input them into the respective chosen equation to create a matrix that is the length of the steps, that contains the values for for the equation that was chosen. I'm getting an unclassifiable statement at:
array(i)=function1(x)
array(i)=function2(x)
array(i)=function3(x)
I feel like it has something to do with how I declared my function, but I cannot figure out a fix to it. Any help would be appreciated.
PROGRAM stuff1
IMPLICIT NONE
!variables
INTEGER::i,step
REAL::lower,upper,function1,function2,function3,x,q,w,e
CHARACTER(20)::option
REAL,ALLOCATABLE::array(:)
!formats
101 FORMAT(A) !single text element only
102 FORMAT() ! <description>
!-------Variable Definitions-------!
!
!
!
!----------------------------------!
!x= .1(upper-lower)
!<Begin Coding Here>
WRITE(*,101)"Hello User, please select a function to evaluate:"
WRITE(*,101)
WRITE(*,101)"A) f(x)=x^2+2x+4"
WRITE(*,101)"B) f(x)=|x+4|"
WRITE(*,101)"C) f(x)=sin(x)+42"
WRITE(*,101)"Enter A,B,or C"
DO
READ(*,101)option
IF ((option.EQ."A") .OR. (option.EQ."a")) THEN
ELSE IF((option.EQ.'B') .OR. (option.EQ.'b'))THEN
ELSE IF((option.EQ.'c') .OR. (option.EQ.'c'))THEN
ELSE
WRITE(*,*)"Please enter A,B,or C"
CYCLE
END IF
EXIT
END DO
WRITE(*,101)"please enter an lower bound:"
READ(*,*)lower
WRITE(*,101)
WRITE(*,101)"please enter an upper bound:"
READ(*,*)upper
WRITE(*,101)
WRITE(*,101)"please enter a step size"
READ(*,*)step
function1=((x**2)+(2*x)+4)
function2=(abs(x+4))
function3=(sin(x)+42)
ALLOCATE(array(step))
x=lower
DO i=1,step
IF ((option.EQ."A") .OR. (option.EQ."a")) THEN
array(i)=function1(x)
ELSE IF((option.EQ.'B') .OR. (option.EQ.'b'))THEN
array(i)=function2(x)
ELSE IF((option.EQ.'c') .OR. (option.EQ.'c'))THEN
array(i)=function3(x)
END IF
x=x+(upper-lower)/step
END DO
DO i=1,step
WRITE(*,'(4F6.2)')array(i)
END DO
END PROGRAM
A:
These lines
function1=((x**2)+(2*x)+4)
function2=(abs(x+4))
function3=(sin(x)+42)
appear to be three statement functions. This is an obsolescent feature which you should not use; instead you should define functions along the lines of
real function one(x)
real, intent(in) :: x
one = x**2 + 2*x + 4
end function one
If you must program like it's 1979 then the correct form for a statement function would be
function1(x)=((x**2)+(2*x)+4)
You have omitted the dummy argument in the definitions; it's no surprise to me that the compiler gets angry and issues that error.
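If it helps, here is a minimal sketch of the modern alternative wired in as an internal procedure via CONTAINS (note that, unlike a statement function, the function name must not also appear in the REAL :: ... declaration list of the host program):
PROGRAM demo
IMPLICIT NONE
REAL :: y
y = one(2.0) ! evaluates 2.0**2 + 2*2.0 + 4 = 12.0
WRITE(*,*) y
CONTAINS
REAL FUNCTION one(x)
REAL, INTENT(IN) :: x
one = x**2 + 2*x + 4
END FUNCTION one
END PROGRAM demo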
|