High School Math Questions Thread (Doctors & UTokyo Grads Only) Part 438 (899 posts)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/
796: 132人目の素数さん [sage] 2025/04/24(木) 11:56:46.69 ID:qcnhUa6a
solve = \(m, N = 100, n = 10, r = 4){
  library(gmp)
  Akadama = \(R) {
    valid = (R >= r) & ((N - R) >= (n - r))
    result = rep(as.bigz(0), length(R))
    result[valid] = chooseZ(R[valid], r) * chooseZ(N - R[valid], n - r)
    result / chooseZ(N, n)
  }
  numerator = Akadama(m)
  candidates = 0:N
  denominator = sum(Akadama(candidates))
  numerator / denominator
}
solve(50:100) |> sum()
http://rio2016.5ch.net/test/read.cgi/math/1723152147/796
797: 132人目の素数さん [sage] 2025/04/27(日) 11:41:50.98 ID:sC6S4NLE
Do the radii of convergence of a power series obtained from d'Alembert's ratio test and from Cauchy's root test coincide? If they do, please sketch a proof.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/797
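A sketch of the standard argument: the root test always determines the radius (Cauchy–Hadamard), and whenever the ratio limit exists the two agree, because for any sequence of nonzero terms
\[
\liminf_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|
\le \liminf_{n\to\infty}\sqrt[n]{|a_n|}
\le \limsup_{n\to\infty}\sqrt[n]{|a_n|}
\le \limsup_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right|.
\]
If \(\lim |a_{n+1}/a_n| = L\) exists, all four quantities equal \(L\), so both tests give \(R = 1/L\); the root test, via \(R = 1/\limsup |a_n|^{1/n}\), applies even when the ratio limit fails to exist.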
798: 132人目の素数さん [sage] 2025/04/29(火) 10:05:00.96 ID:pY4WJf3b
alphaResult = FindRoot[
  CDF[BetaDistribution[x, 5*x], 1/5] - CDF[BetaDistribution[x, 5*x], 1/7] == 0.95,
  {x, 15} (* initial guess *)
];
alpha = x /. alphaResult;
beta = 5 * alpha;
fp[ones_] := CDF[BetaDistribution[alpha + ones, beta], 1/7] +
  (1 - CDF[BetaDistribution[alpha + ones, beta], 1/5]);
ones = 1;
While[fp[ones] < 0.5, ones++];  (* loop until fp >= 0.5; Until is not a built-in *)
Print[ones]
http://rio2016.5ch.net/test/read.cgi/math/1723152147/798
799: 132人目の素数さん [sage] 2025/04/29(火) 10:38:26.36 ID:pY4WJf3b
question = "Define a good die as one whose 95% confidence interval for the
probability of rolling a 1 lies within [1/7, 1/5]. Anything else is called an
F-rank die. A chimp fetishist went berserk and gnawed on a die.
We want to know whether this die is still good or now F-rank.
(1) If it is thrown twice and both throws come up 1, find the probability that the die is F-rank.
(2) How many 1s in a row from the start are needed before the probability that it is F-rank exceeds 0.5?
Conditions needed for the calculation may be set as appropriate."
alpha = uniroot(function(x) pbeta(1/5, x, 5*x) - pbeta(1/7, x, 5*x) - 0.95, c(1, 500))$root
beta = 5 * alpha
fp = \(ones) pbeta(1/7, alpha + ones, beta) + pbeta(1/5, alpha + ones, beta, lower = FALSE)
fp(2)
ones = 1
while (fp(ones) < 0.5) ones = ones + 1
ones
http://rio2016.5ch.net/test/read.cgi/math/1723152147/799
800: 132人目の素数さん [sage] 2025/04/29(火) 18:09:40.80 ID:pY4WJf3b
library(RcppAlgos)
library(fmsb)
library(matrixStats)
N <- 50
# Generate all combinations
cm <- comboGeneral(0:N, 3, repetition = FALSE)
# Pre-allocate
n_vec <- rep(N, 3)
success <- cm
failure <- matrix(n_vec, nrow = nrow(cm), ncol = 3, byrow = TRUE) - cm
# Modified fast Fisher function - correct implementation
fast_fisher <- function(success, failure) {
  # Initialize p-value matrix
  pvals <- matrix(NA_real_, nrow = nrow(success), ncol = 3)
  # Perform pairwise comparisons
  for (i in 1:nrow(success)) {
    # 1 vs 2
    pvals[i, 1] <- fisher.test(matrix(c(success[i, 1], failure[i, 1],
                                        success[i, 2], failure[i, 2]), nrow = 2))$p.value
    # 1 vs 3
    pvals[i, 2] <- fisher.test(matrix(c(success[i, 1], failure[i, 1],
                                        success[i, 3], failure[i, 3]), nrow = 2))$p.value
    # 2 vs 3
    pvals[i, 3] <- fisher.test(matrix(c(success[i, 2], failure[i, 2],
                                        success[i, 3], failure[i, 3]), nrow = 2))$p.value
  }
  # Bonferroni adjustment
  pmin(pvals * 3, 1)  # Cap at 1 after adjustment
}
# Run with timing
system.time({
  # Overall Fisher tests
  overall_p <- apply(cbind(success, failure), 1, function(x) {
    fisher.test(matrix(x, nrow = 2))$p.value
  })
  # Pairwise Fisher tests
  pairwise_p <- fast_fisher(success, failure)
  min_pairwise_p <- rowMins(pairwise_p, na.rm = TRUE)
  # Filter condition
  keep <- overall_p > 0.05 & min_pairwise_p < 0.05
  result <- cm[keep, ]
})
# Print first few results
head(result)
nrow(result)  # Number of qualifying combinations
http://rio2016.5ch.net/test/read.cgi/math/1723152147/800
801: 132人目の素数さん [sage] 2025/04/29(火) 19:11:07.26 ID:A0Mypkqe
>>800 Can't solve proof problems that high schoolers handle, so you sneak around posting this stuff?
http://rio2016.5ch.net/test/read.cgi/math/1723152147/801
802: 132人目の素数さん [sage] 2025/04/29(火) 21:05:19.52 ID:pY4WJf3b
options(warn = -1)
library(RcppAlgos)
N = 50
alpha = 0.01
cm = comboGeneral(0:N, 3, repetition = FALSE)
f = \(x, Yates = FALSE){
  n = rep(N, 3)
  pc = chisq.test(rbind(x, n - x), correct = Yates)$p.value
  pps = as.vector(pairwise.prop.test(x, n, correct = Yates, p.adj = "bon")$p.value)
  minp = min(pps, na.rm = TRUE)
  # pf > 0.05 & minp < 0.05
  c(pc, minp)
}
f(c(18, 25, 33))
"
re = t(apply(cm, 1, f))
ans = cm[apply(re, 1, function(x) x[1] > alpha & x[2] < alpha), ]
head(ans); tail(ans)
mid = round(nrow(ans)/2)
ans[(mid-3):(mid+3), ]
ansp = t(apply(ans, 1, f))
head(ansp); tail(ansp)
P_all = ansp[, 1]
P_pair = ansp[, 2]
plot(P_all, P_pair, pch = 16, bty = 'l')
hist(P_all)
hist(P_pair)
summary(P_all)
summary(P_pair)
"
http://rio2016.5ch.net/test/read.cgi/math/1723152147/802
803: 132人目の素数さん [sage] 2025/04/30(水) 02:47:22.48 ID:wedVH8wl
>>801 Try reading the thread title. F-rank grads have their own thread.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/803
804: 132人目の素数さん [sage] 2025/04/30(水) 04:58:45.74 ID:wedVH8wl
options(warn = -1)
alpha = 0.05
sim = \(N = 100){
  A = sample(1:(N-2), 1)  # Ensure A leaves enough room for B and C
  remaining = N - A
  if (remaining > 1){
    B = sample(1:(remaining-1), 1)  # Ensure B is at least 1
    C = N - A - B
    ABC = c(A, B, C)
    abc = sapply(ABC, \(x) if (x > 0) sample(1:x, 1) else 0)  # Handle a zero group size (unlikely with the sampling above)
    x = abc
    n = ABC
    contig = rbind(s = x, f = n - x)
    pg = chisq.test(contig, correct = FALSE)$p.value
    minpp = min(as.vector(pairwise.prop.test(x, n, correct = FALSE, p.adj = "bon")$p.value), na.rm = TRUE)
    list(pg = pg, minpp = minpp, x = x, n = n)
  } else {
    # If A is too large, return NULL to flag an invalid simulation
    return(NULL)
  }
}
res = sim()
while (is.null(res) || res$pg < alpha || res$minpp > alpha) {
  res = sim()
}
res
http://rio2016.5ch.net/test/read.cgi/math/1723152147/804
805: 132人目の素数さん [sage] 2025/04/30(水) 05:14:16.47 ID:wedVH8wl
options(warn = -1)
alpha = 0.05
sim = \(N = 100){
  A = sample(1:(N-2), 1)  # Ensure A leaves enough room for B and C
  remaining = N - A
  if (remaining > 1){
    B = sample(1:(remaining-1), 1)  # Ensure B is at least 1
    C = N - A - B
    ABC = c(A, B, C)
    abc = sapply(ABC, \(x) if (x > 0) sample(1:x, 1) else 0)  # Handle a zero group size (unlikely with the sampling above)
    x = abc
    n = ABC
    contig = rbind(s = x, f = n - x)
    pg = chisq.test(contig, correct = FALSE)$p.value
    minpp = min(as.vector(pairwise.prop.test(x, n, correct = FALSE, p.adj = "bon")$p.value), na.rm = TRUE)
    list(pg = pg, minpp = minpp, x = x, n = n)
  } else {
    # If A is too large, return NULL to flag an invalid simulation
    return(NULL)
  }
}
res = sim()
while (is.null(res) || res$pg < alpha || res$minpp > alpha) {
  res = sim()
}
res
res = sim()
while (is.null(res) || res$pg > alpha || res$minpp < alpha) {
  res = sim()
}
res
http://rio2016.5ch.net/test/read.cgi/math/1723152147/805
806: 132人目の素数さん [sage] 2025/04/30(水) 06:57:02.91 ID:wedVH8wl
As you say, dropping the continuity correction from the chi-squared test makes even small differences more likely to reach significance, so the odds of constructing the requested data improve. Fisher's exact test, however, precisely because of its exactness, yields discrete p-values when sample sizes are small or proportion differences are slight, and it struggles to clear a strict multiple-comparison adjustment like Bonferroni.
Because Fisher's exact test computes p-values from probabilities conditional on fixed margins, "slight differences" are inherently hard to declare significant. In particular, if the proportion differences are kept small enough that the overall test is non-significant, the pairwise differences are similarly small, and the Bonferroni-tightened significance level makes detection very difficult.
The continuity correction exists to reduce the error of applying the continuous chi-squared approximation to discrete data, so removing it tends to shrink p-values. Fisher's exact test, by contrast, uses no approximation, so the notion of a continuity correction does not apply to it.
In conclusion, as you point out, constructing data where the overall Fisher exact test shows no significance yet some pairwise Fisher exact test (with Bonferroni correction) does is, given these statistical constraints, extremely difficult.
So it seems the Bonferroni wall cannot be overcome with Fisher's test after all.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/806
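A small illustration of the discreteness described above, as a sketch: with both margins fixed at 10, a 2x2 Fisher test can take only a handful of distinct p-values, so clearing a Bonferroni threshold like alpha/3 is often simply unattainable.
# Sketch: enumerate Fisher p-values for all 2x2 tables with row and
# column margins fixed at 10 (20 subjects, 10 successes total).
pvals <- sapply(0:10, function(k)
  fisher.test(matrix(c(k, 10 - k, 10 - k, k), nrow = 2))$p.value)
sort(unique(round(pvals, 4)))  # only a few attainable p-values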
807: 132人目の素数さん [sage] 2025/04/30(水) 07:36:17.71 ID:IZhDMqNd
>>803 You're the one who can't read the thread title, you dope.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/807
808: 132人目の素数さん [sage] 2025/04/30(水) 08:07:09.90 ID:wedVH8wl
No F-rank access allowed here.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/808
809: 132人目の素数さん [sage] 2025/04/30(水) 08:07:44.58 ID:wedVH8wl
options(warn = -1)
alpha <- 0.05
sim_fisher <- function(N = 100) {
  # Simulate data for three groups and perform Fisher's exact test.
  # N: Total sample size.
  # Determine sample sizes for each of the three groups.
  A <- sample(1:(N - 2), 1)  # Ensure A leaves enough room for B and C.
  remaining <- N - A
  if (remaining > 1) {
    B <- sample(1:(remaining - 1), 1)  # Ensure B is at least 1.
    C <- N - A - B
    ABC <- c(A, B, C)  # Vector of group sizes.
    # Randomly generate the number of successes for each group (cannot exceed group size).
    abc <- sapply(ABC, function(x) if (x > 0) sample(0:x, 1) else 0)
    x <- abc  # Vector of number of successes per group.
    n <- ABC  # Vector of total samples per group.
    # Create a data frame for the overall Fisher's exact test.
    data_all <- data.frame(
      group = factor(rep(c("A", "B", "C"), times = n)),
      success = unlist(sapply(1:3, function(i) c(rep(1, x[i]), rep(0, n[i] - x[i]))))
    )
    table_all <- table(data_all$group, data_all$success)
    fisher_pg <- fisher.test(table_all)$p.value  # P-value of the overall Fisher's exact test.
    # Perform pairwise Fisher's exact tests with Bonferroni correction.
    pairwise_p_values <- numeric(3)
    pairs <- combn(levels(data_all$group), 2, simplify = FALSE)
    for (i in seq_along(pairs)) {
      pair <- pairs[[i]]
      subset_data <- droplevels(subset(data_all, group %in% pair))  # drop the unused level so the table is 2x2
      table_pair <- table(subset_data$group, subset_data$success)
      pairwise_p_values[i] <- fisher.test(table_pair)$p.value  # Pairwise Fisher p-value.
    }
    min_pairwise_p_bonf <- min(p.adjust(pairwise_p_values, method = "bonferroni"), na.rm = TRUE)
    list(fisher_pg = fisher_pg, min_pairwise_p_bonf = min_pairwise_p_bonf, x = x, n = n)
  } else {
    return(NULL)  # Return NULL if group sizes are invalid.
  }
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/809
810: 132人目の素数さん [sage] 2025/04/30(水) 08:07:51.54 ID:wedVH8wl
# Find data where the overall Fisher's exact test is not significant,
# but at least one pairwise Fisher's exact test (with Bonferroni correction) is significant.
res_no_overall_sig_pairwise_sig <- NULL
while (is.null(res_no_overall_sig_pairwise_sig) ||
       res_no_overall_sig_pairwise_sig$fisher_pg < alpha ||
       res_no_overall_sig_pairwise_sig$min_pairwise_p_bonf > alpha) {
  res_no_overall_sig_pairwise_sig <- sim_fisher()
}
cat("Data where overall Fisher's test is not significant, but pairwise is:\n")
print(res_no_overall_sig_pairwise_sig)
cat("\n")
# Find data where the overall Fisher's exact test is significant,
# but all pairwise Fisher's exact tests (with Bonferroni correction) are not significant.
res_overall_sig_no_pairwise_sig <- NULL
while (is.null(res_overall_sig_no_pairwise_sig) ||
       res_overall_sig_no_pairwise_sig$fisher_pg > alpha ||
       res_overall_sig_no_pairwise_sig$min_pairwise_p_bonf < alpha) {
  res_overall_sig_no_pairwise_sig <- sim_fisher()
}
cat("Data where overall Fisher's test is significant, but pairwise is not:\n")
print(res_overall_sig_no_pairwise_sig)
options(warn = 0)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/810
811: 132人目の素数さん [sage] 2025/04/30(水) 08:12:32.33 ID:wedVH8wl
options(warn = -1)
alpha <- 0.05
sim_chisq <- function(N = 100) {
  # Simulate data for three groups and perform a Chi-squared test (without Yates' correction).
  # N: Total sample size.
  # Determine sample sizes for each of the three groups.
  A <- sample(1:(N - 2), 1)  # Randomly select a size for group A, ensuring space for B and C.
  remaining <- N - A
  if (remaining > 1) {
    B <- sample(1:(remaining - 1), 1)  # Randomly select a size for group B, ensuring space for C.
    C <- N - A - B  # Calculate the size for group C.
    ABC <- c(A, B, C)  # Vector containing the sample sizes of the three groups.
    # Randomly generate the number of successes for each group (between 0 and the group size).
    abc <- sapply(ABC, function(x) if (x > 0) sample(0:x, 1) else 0)
    x <- abc  # Number of successes for each group.
    n <- ABC  # Total number of trials for each group.
    # Create a contingency table for the overall Chi-squared test.
    contig_all <- rbind(s = x, f = n - x)  # Rows: successes (s), failures (f); columns: groups.
    chisq_pg <- chisq.test(contig_all, correct = FALSE)$p.value  # Chi-squared test (no correction).
    # Perform pairwise proportion tests (no correction) and get Bonferroni-adjusted p-values.
    pairwise_prop_p_values <- as.vector(
      pairwise.prop.test(x, n, correct = FALSE, p.adj = "bon")$p.value
    )
    min_pairwise_p_bonf <- min(pairwise_prop_p_values, na.rm = TRUE)  # Minimum adjusted pairwise p-value.
    # Return the overall p-value, the minimum Bonferroni-corrected pairwise p-value,
    # the successes, and the total trials.
    list(chisq_pg = chisq_pg, min_pairwise_p_bonf = min_pairwise_p_bonf, x = x, n = n)
  } else {
    return(NULL)  # Return NULL if the group sizes are invalid.
  }
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/811
812: 132人目の素数さん [sage] 2025/04/30(水) 08:12:39.62 ID:wedVH8wl
# Find data where the overall Chi-squared test is not significant (p > alpha),
# but at least one pairwise proportion test (with Bonferroni correction) is significant (p <= alpha).
res_no_overall_sig_pairwise_sig <- NULL
while (is.null(res_no_overall_sig_pairwise_sig) ||
       res_no_overall_sig_pairwise_sig$chisq_pg < alpha ||
       res_no_overall_sig_pairwise_sig$min_pairwise_p_bonf > alpha) {
  res_no_overall_sig_pairwise_sig <- sim_chisq()  # Keep simulating until the condition is met.
}
cat("Data where overall Chi-squared test is not significant, but pairwise proportion test is:\n")
print(res_no_overall_sig_pairwise_sig)
cat("\n")
# Find data where the overall Chi-squared test is significant (p <= alpha),
# but all pairwise proportion tests (with Bonferroni correction) are not significant (p > alpha).
res_overall_sig_no_pairwise_sig <- NULL
while (is.null(res_overall_sig_no_pairwise_sig) ||
       res_overall_sig_no_pairwise_sig$chisq_pg > alpha ||
       res_overall_sig_no_pairwise_sig$min_pairwise_p_bonf < alpha) {
  res_overall_sig_no_pairwise_sig <- sim_chisq()  # Keep simulating until the condition is met.
}
cat("Data where overall Chi-squared test is significant, but pairwise proportion test is not:\n")
print(res_overall_sig_no_pairwise_sig)
options(warn = 0)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/812
813: 132人目の素数さん [sage] 2025/04/30(水) 08:13:09.55 ID:wedVH8wl
The comments got so long it's hard to read now.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/813
814: 132人目の素数さん [sage] 2025/04/30(水) 08:38:32.10 ID:IZhDMqNd
>>808 You can't even read Japanese, so you're not even F-rank, lol.
And no way someone who can't solve proof problems meant for high schoolers is a doctor or a UTokyo grad, you clown.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/814
815: 132人目の素数さん [sage] 2025/05/01(木) 09:07:57.06 ID:L1qIlz9/
Making the Dirichlet prior's parameter alpha hierarchical allows a more reliable Bayesian estimate.
Especially in a case like this one, where the die "may actually be biased" and there are few repetitions, a hierarchical Bayesian model is the more appropriate framework.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/815
816: 132人目の素数さん [sage] 2025/05/01(木) 09:09:32.28 ID:L1qIlz9/
A Stan version takes a long time to compile. For simple models JAGS is better, and it can handle discrete variables too.
# JAGS model
library(rjags)
# Prepare the data
outcome_data <- c(rep(1, 17), rep(2, 21), rep(3, 15), rep(4, 21), rep(5, 20), rep(6, 6))
N <- length(outcome_data)
data_jags <- list(outcome = outcome_data, N = N)
# Initial values (adjust as needed)
inits_jags <- list(
  list(alpha = rep(1, 6), eta = 1),
  list(alpha = runif(6, 0.1, 2), eta = 5)
)
# Compile the model
model_jags <- jags.model(
  file = "hierarchical_dice_model.jag",
  data = data_jags,
  inits = inits_jags,
  n.chains = 2,
  n.adapt = 1000
)
# Sampling
samples_jags <- coda.samples(
  model = model_jags,
  variable.names = c("prob_simplex", "alpha", "eta"),
  n.iter = 4000
)
# Summary of the results
cat("\nJAGS Sampling Results Summary:\n")
summary(samples_jags)
# Extract posterior samples (prob_simplex)
prob_simplex_posterior_jags <- as.matrix(samples_jags[, grep("prob_simplex", varnames(samples_jags))])
head(prob_simplex_posterior_jags)
# Plotting (example: posterior distribution of probabilities for each outcome)
cat("\nPosterior Distribution Plots for Each Outcome:\n")
par(mfrow = c(2, 3))
for (i in 1:6) {
  plot(prob_simplex_posterior_jags[, i], type = "l",
       main = paste("Prob[", i, "]"), xlab = "Iteration", ylab = "Probability")
  abline(h = 1/6, col = "red", lty = 2)
}
dice_prob_mean = prob_simplex_posterior_jags
colors <- c("skyblue", "lightcoral", "lightgreen", "gold", "lightsalmon", "lightcyan")
for (i in 1:ncol(dice_prob_mean)) {
  BEST::plotPost(dice_prob_mean[, i], compVal = 1/6, xlab = paste("pip ", i),
                 xlim = c(0, 0.4), main = "", col = colors[i], border = "black")
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/816
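The contents of hierarchical_dice_model.jag are not shown in the post. A minimal sketch of what such a model might look like, assuming the monitored names prob_simplex, alpha, and eta, and taking the initial values above to mean that alpha is stochastic and eta is a scalar concentration hyperparameter:
# Hypothetical hierarchical_dice_model.jag (assumed, not from the post)
model {
  for (i in 1:N) {
    outcome[i] ~ dcat(prob_simplex[1:6])   # each throw is categorical
  }
  prob_simplex[1:6] ~ ddirch(alpha[1:6])   # Dirichlet prior over the six faces
  for (k in 1:6) {
    alpha[k] ~ dgamma(eta, 1)              # hierarchical prior on the concentrations
  }
  eta ~ dgamma(1, 1)                       # hyperprior tying the faces together
}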
817: 132人目の素数さん [sage] 2025/05/01(木) 09:14:13.25 ID:L1qIlz9/
With the posterior in hand, computing odds ratios and the like is straightforward.
https://i.imgur.com/k0iLH6B.png
http://rio2016.5ch.net/test/read.cgi/math/1723152147/817
818: 132人目の素数さん [sage] 2025/05/01(木) 11:18:32.55 ID:L1qIlz9/
Code whose search for a niche set of values never terminates:
rm(list = ls())
library(fmsb)
library(parallel)
alpha <- 0.05
# Function to perform a single simulation
sim_single <- function(N = 100) {
  A <- sample(1:N, 1)
  while (A > N - 2) A <- sample(1:N, 1)
  B <- sample(N - A - 1, 1)
  C <- N - A - B
  ABC <- c(A, B, C)
  abc <- sapply(ABC, function(x) sample(x, 1))
  x <- abc
  n <- ABC
  m <- rbind(s = x, f = n - x)
  bonf_res <- pairwise.fisher.test(x, n, p.adj = 'bonf')
  holm_res <- pairwise.fisher.test(x, n, p.adj = 'holm')
  fdr_res <- pairwise.fisher.test(x, n, p.adj = 'fdr')
  none_res <- pairwise.fisher.test(x, n, p.adj = 'none')
  bonf <- min(bonf_res$p.value, na.rm = TRUE)
  holm <- min(holm_res$p.value, na.rm = TRUE)
  fdr <- min(fdr_res$p.value, na.rm = TRUE)
  none <- min(none_res$p.value, na.rm = TRUE)
  list(m = m, bonf = bonf, holm = holm, fdr = fdr, none = none)
}
# Function to find a result that meets the criteria using parallel processing
find_result_parallel_loop <- function(alpha = 0.05, num_cores = detectCores() - 1) {
  # Create a cluster of worker processes
  cl <- makeCluster(num_cores)
  # Ensure the cluster is stopped when the function exits
  on.exit(stopCluster(cl))
  # Export the sim_single function to the worker processes
  clusterExport(cl, "sim_single")
  # Load the fmsb library in the worker processes
  clusterEvalQ(cl, library(fmsb))
  cat("Searching for a result that meets the criteria...\n")
  while (TRUE) {
    # Run as many simulations in parallel as there are cores
    results <- parLapply(cl, 1:num_cores, function(i) sim_single())
    # Check the results for the desired condition
    for (res in results) {
      if (res$bonf > alpha && res$holm < alpha) {
        cat("Result meeting the criteria found:\n")
        return(res)
      }
    }
    # If no result found in this batch, continue the loop
  }
}
# Search in parallel until a result is found
res_parallel_loop <- find_result_parallel_loop(alpha = alpha)
# Output the result
print(res_parallel_loop)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/818
819: 132人目の素数さん [sage] 2025/05/01(木) 19:36:50.50 ID:VtjTJL9d
For a comparison of proportions across three or more groups, construct data (at significance level 0.05) where pairwise significance tests with Bonferroni correction find no significant pair, yet with Holm correction some pair is significant.
The group sample sizes may be unequal.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/819
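A quick way to see why such data cannot exist: Holm's first step uses exactly the Bonferroni threshold. With the m pairwise p-values sorted as \(p_{(1)} \le \dots \le p_{(m)}\),
\[
\text{Holm rejects anything at all} \iff p_{(1)} \le \frac{\alpha}{m},
\]
which is precisely Bonferroni's criterion for the smallest p-value; Holm only relaxes the threshold to \(\alpha/(m-k+1)\) for later steps, after that first hurdle is cleared. So "some pair significant under Holm while none is significant under Bonferroni" is impossible, which would explain why the search loop in >>818 never terminates.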
820: 132人目の素数さん [sage] 2025/05/01(木) 19:38:55.56 ID:twZ5uJsW
This is a question thread; please take homework requests to the homework-request thread.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/820
821: 132人目の素数さん [sage] 2025/05/01(木) 22:05:28.78 ID:xi2mMyC+
The garbage is at it again. Dragging out some clueless site and spouting nonsense. Can't you tell a correct site from a wrong one? The kind of sucker who believes anything just because "graduate school" is attached to it. What you write is complete trash.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/821
822: 132人目の素数さん [sage] 2025/05/02(金) 10:48:45.43 ID:+8QO9mMm
set.seed(123)
library(fmsb)
alpha <- 0.05
N <- 1000
sim <- function() {
  # Sample sizes per group (may be unequal)
  n1 <- sample(250:350, 1)  # low dose
  n2 <- sample(250:350, 1)  # high dose
  n3 <- N - n1 - n2         # placebo
  # Success rates per group (no pairwise difference; placebo vs pooled differs)
  p1 <- 0.30  # low dose
  p2 <- 0.31  # high dose
  p3 <- 0.22  # placebo (lower)
  x1 <- rbinom(1, n1, p1)
  x2 <- rbinom(1, n2, p2)
  x3 <- rbinom(1, n3, p3)
  m <- rbind(success = c(x1, x2, x3),
             failure = c(n1 - x1, n2 - x2, n3 - x3))
  bonf_p <- suppressWarnings(min(as.vector(
    pairwise.fisher.test(m[1, ], colSums(m), p.adj = "bonf")$p.value), na.rm = TRUE))
  combined <- matrix(c(x1 + x2, n1 + n2 - x1 - x2, x3, n3 - x3), nrow = 2)
  comb_p <- fisher.test(combined)$p.value
  list(m = m, bonf = bonf_p, combined_p = comb_p, sizes = c(n1, n2, n3))
}
# Loop until data satisfying the condition appear
res <- sim()
while (!(res$bonf > alpha && res$combined_p < alpha)) {
  res <- sim()
}
# Output the results
res$m
cat("Bonferroni-adjusted pairwise p-value (min):", res$bonf, "\n")
cat("Combined treatment vs placebo p-value:", res$combined_p, "\n")
cat("Sample sizes:", res$sizes, "\n")
http://rio2016.5ch.net/test/read.cgi/math/1723152147/822
823: 132人目の素数さん [sage] 2025/05/02(金) 11:02:18.75 ID:+8QO9mMm
set.seed(123)
library(fmsb)
alpha <- 0.05
N <- 1000
# Simulation function
sim <- function() {
  # Random group sizes
  n1 <- sample(250:350, 1)  # Low-dose
  n2 <- sample(250:350, 1)  # High-dose
  n3 <- N - n1 - n2         # Placebo
  if (n3 < 100) return(NULL)  # Skip too-small placebo
  # Success rates from uniform distribution
  p1 <- runif(1)
  p2 <- runif(1)
  p3 <- runif(1)
  # Binomial draws
  x1 <- rbinom(1, n1, p1)
  x2 <- rbinom(1, n2, p2)
  x3 <- rbinom(1, n3, p3)
  # 3-group matrix
  m3 <- rbind(success = c(x1, x2, x3),
              failure = c(n1 - x1, n2 - x2, n3 - x3))
  colnames(m3) <- c("Low", "High", "Placebo")
  # Add 4th group = combined (low + high)
  x4 <- x1 + x2
  n4 <- n1 + n2
  m4 <- cbind(m3, Combined = c(x4, n4 - x4))
  # Perform pairwise Fisher's exact tests across 4 groups
  pw <- suppressWarnings(pairwise.fisher.test(m4[1, ], colSums(m4), p.adj = "bonf")$p.value)
  pw_vals <- as.vector(pw)
  pw_vals <- pw_vals[!is.na(pw_vals)]
  names_all <- names(pw_vals)
  # Identify significant pairs
  sig_idx <- which(pw_vals < alpha)
  sig_names <- names(pw_vals)[sig_idx]
  # Check if only Combined vs Placebo is significant
  is_valid <- length(sig_idx) == 1 && any(grepl("Placebo-Combined|Combined-Placebo", sig_names))
  if (is_valid) {
    return(list(m = m4, probs = c(p1, p2, p3), sizes = c(n1, n2, n3), pvals = pw))
  } else {
    return(NULL)
  }
}
# Run until condition met
res <- NULL
while (is.null(res)) {
  res <- sim()
}
# Output results
print(res$m)
cat("Success probabilities (Low, High, Placebo):", round(res$probs, 3), "\n")
cat("Sample sizes (Low, High, Placebo):", res$sizes, "\n")
cat("Pairwise Bonferroni-adjusted p-values:\n")
print(res$pvals)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/823
824: 132人目の素数さん [sage] 2025/05/02(金) 22:44:37.00 ID:056ygUN9
If the EMPA-REG trial had been analysed with Bayesian methods, there would have been no need to pool the low- and high-dose arms.
Even over the 237:253 split range, where frequentist statistics find no significant difference, one can show that the drug suppresses events relative to placebo.
############## 237:253 ################
# JAGS model string
model_string <- "
model {
  for (i in 1:N) {
    s[i] ~ dbin(theta[i], n[i])  # n[i] supplied separately
    theta[i] ~ dbeta(1, 1)       # non-informative prior
  }
}
"
EMPA_REG = \(x, verbose = FALSE){
  # Data
  data_list <- list(
    s = c(288, x, 490 - x),
    n = c(2333, 2345, 2342),
    N = 3
  )
  # Initial values
  # init_fun <- function() list(theta = runif(3, 0.05, 0.15))
  init_fun <- function() list(theta = runif(3))
  # Build, initialise, and run the model
  library(rjags)
  model <- jags.model(textConnection(model_string), data = data_list,
                      inits = init_fun, n.chains = 3, n.adapt = 1000)
  # Burn-in and sampling
  update(model, 1000)
  samples <- coda.samples(model, variable.names = c("theta"), n.iter = 5000)
  ms = as.matrix(samples)
  Placebo = ms[, 1]
  Low = ms[, 2]
  High = ms[, 3]
  if (verbose){
    ylim = c(0, max(max(density(Placebo)$y), max(density(Low)$y), max(density(High)$y)))
    xu = max(max(density(Placebo)$x), max(density(Low)$x), max(density(High)$x))
    xl = min(min(density(Placebo)$x), min(density(Low)$x), min(density(High)$x))
    xlim = c(xl, xu)
    plot(density(Placebo), xlim = xlim, ylim = ylim, xlab = quote(theta), ylab = '', main = '', col = 8)
    lines(density(Low), col = 'pink', lwd = 2)
    lines(density(High), col = 'red', lwd = 2)
  }
  c(Low_Effective = mean(Low < Placebo), High_Effective = mean(High < Placebo))
}
ans = sapply(237:253, EMPA_REG)
ans
http://rio2016.5ch.net/test/read.cgi/math/1723152147/824
825: 132人目の素数さん [sage] 2025/05/04(日) 19:45:41.87 ID:Ie2Wyhjx
The BEST package has disappeared from CRAN, so I recreated a function equivalent to plotPost (I didn't bother with error handling for invalid input and the like).
https://i.imgur.com/GYBGnJs.png
This function helps visualize posterior distribution samples from Bayesian inference and displays various informative elements. It can show the mean, mode, median, a credible interval (either HDI or quantiles), the Region of Practical Equivalence (ROPE), and a comparison value.
Arguments:
posterior_samples: A numeric vector of posterior distribution samples.
credMass: The width of the credible interval (a numeric value between 0 and 1; default is 0.95).
ROPE: A numeric vector specifying the ROPE range (e.g., c(lower_bound, upper_bound)).
compVal: A numeric value for comparison.
showMean: A logical value indicating whether to display the mean (TRUE/FALSE).
showMode: A logical value indicating whether to display the mode (TRUE/FALSE).
showMedian: A logical value indicating whether to display the median (TRUE/FALSE).
showCurve: A logical value indicating whether to display a density plot (TRUE/FALSE; if FALSE, a histogram is shown).
showQuantile: A logical value indicating whether to display the credible interval as quantiles instead of HDI (TRUE/FALSE).
xlab: The label for the x-axis.
main: The title of the plot.
hist_color: The color of the histogram.
textSize: The size of the text elements.
yaxt: The y-axis style (default is to show only numbers).
...: Additional arguments to be passed to the plotting function.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/825
826: 132人目の素数さん [sage] 2025/05/05(月) 19:07:36.02 ID:LDY1RQtT
# This R function, riskratio.boot, calculates the risk ratio and its Highest Density Interval (HDI)
# using a bootstrap method. It takes the number of events and the total number of observations
# for two groups as input.
riskratio.boot <- function(r1, r2, n1, n2, nboot = 5000, conf.level = 0.95, verbose = FALSE){
  # Combine the number of events and total observations for both groups.
  r <- c(r1, r2)
  n <- c(n1, n2)
  # Create a 2x2 matrix representing the contingency table (number of events and non-events).
  m <- cbind(r, n - r); m
  # Perform bootstrap resampling to estimate the risk ratio distribution.
  rr <- replicate(nboot, {
    # Simulate the number of events in the first group by sampling with replacement
    # from a population with the observed event rate.
    sample(c(rep(1, r1), rep(0, n1 - r1)), n1, replace = TRUE) |> sum() -> R1
    # Simulate the number of events in the second group similarly.
    sample(c(rep(1, r2), rep(0, n2 - r2)), n2, replace = TRUE) |> sum() -> R2
    # Calculate the risk ratio from the bootstrapped event counts.
    (R1 / n1) / (R2 / n2)
  })
  # If verbose is TRUE, plot the distribution of the risk ratio.
  if (verbose){
    source('plotpost.R')  # 'plotpost.R' is a script for plotting posterior distributions.
    plotpost(rr, compVal = 1, showCurve = 1, lwd = 4)  # Plot with a comparison value of 1.
  }
  # Calculate the mean of the bootstrapped risk ratios.
  b_mean <- mean(rr)
  # Calculate the Highest Density Interval (HDI) of the bootstrapped risk ratios.
  b_ci <- HDInterval::hdi(rr, credMass = conf.level)
  # Return a list containing the bootstrap mean and the HDI.
  list(b_mean = b_mean, b_ci = b_ci)
}
# Example usage of the riskratio.boot function.
riskratio.boot(244, 282, 2345, 2333)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/826
827: 132人目の素数さん [sage] 2025/05/05(月) 19:08:42.51 ID:LDY1RQtT
Description:
The `riskratio.boot` function in R estimates the risk ratio between two groups and provides its Highest Density Interval (HDI) using a bootstrap resampling approach. It takes the counts of events and the total number of observations for each of the two groups as input.
Usage:
riskratio.boot(r1, r2, n1, n2, nboot = 5000, conf.level = 0.95, verbose = FALSE)
Arguments:
* `r1`: The number of events in the first group.
* `r2`: The number of events in the second group.
* `n1`: The total number of observations in the first group.
* `n2`: The total number of observations in the second group.
* `nboot`: The number of bootstrap replicates to perform (default is 5000). A larger number of replicates generally provides more stable estimates.
* `conf.level`: The credibility level for the Highest Density Interval (HDI) (default is 0.95, corresponding to a 95% HDI).
* `verbose`: A logical value indicating whether to display a plot of the bootstrap distribution of the risk ratio (default is `FALSE`). If `TRUE`, it assumes a script named `plotpost.R` is available in the working directory for plotting.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/827
828: 132人目の素数さん [sage] 2025/05/05(月) 19:09:02.55 ID:LDY1RQtT
Details:
The function works by simulating the event outcomes in each group through bootstrap resampling. For each group, it draws `n1` (or `n2`) samples with replacement from a hypothetical population that has the observed proportion of events (`r1/n1` or `r2/n2`). The number of events in each resampled set (`R1` and `R2`) is then used to calculate a bootstrapped risk ratio `(R1/n1) / (R2/n2)`. This process is repeated `nboot` times to generate a distribution of risk ratios.
The function then calculates the mean of this distribution and its Highest Density Interval (HDI), which represents the most credible range for the true risk ratio given the data and the bootstrap procedure.
If `verbose` is set to `TRUE`, the function attempts to plot the distribution of the bootstrapped risk ratios using a script named `plotpost.R`. This requires that the `plotpost.R` script exists in the current working directory and is capable of handling the vector of bootstrapped risk ratios.
Value:
The function returns a list with the following components:
* `b_mean`: The mean of the bootstrapped risk ratios.
* `b_ci`: The Highest Density Interval (HDI) of the bootstrapped risk ratios, as a vector with the lower and upper bounds.
Note:
The `verbose = TRUE` option depends on an external script `plotpost.R`. If you intend to use this option, ensure that the script is available and correctly implemented for plotting posterior-like distributions. The HDI is calculated using the `hdi` function from the `HDInterval` package, so this package must be installed (`install.packages("HDInterval")`) if you intend to use the default behavior.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/828
829: 132人目の素数さん [sage] 2025/05/05(月) 19:23:25.28 ID:LDY1RQtT
> riskratio.boot(244, 282, 2345, 2333, nboot = 10000)
$b_mean
[1] 0.8641319
$b_ci
    lower     upper
0.7312212 1.0106121
attr(,"credMass")
[1] 0.95
http://rio2016.5ch.net/test/read.cgi/math/1723152147/829
830: 132人目の素数さん [sage] 2025/05/09(金) 06:19:21.94 ID:vIVXuysf
SequentialPrimeDigits[n_Integer] := Module[{p = 10, result = {}},
  While[Length[result] < n,
    If[PrimeQ[p] && AllDifferencesAreOneQ[p],
      AppendTo[result, p]
    ];
    p++
  ];
  result
]
AllDifferencesAreOneQ[num_Integer] :=
  Differences[IntegerDigits[num]] === ConstantArray[1, Length[IntegerDigits[num]] - 1]
SequentialPrimeDigits[5]
{23, 67, 89, 4567, 23456789}
SequentialPrimeDigits[20]
http://rio2016.5ch.net/test/read.cgi/math/1723152147/830
831: 132人目の素数さん [sage] 2025/05/10(土) 11:02:46.60 ID:ynDPH7B8
# ------------------------------------------------------------------------------
# File name: logistic_regression_uraguchi_factors.R
# Purpose: logistic regression analysis evaluating the determinants of backdoor admission
# Explanatory variables: 学力 (ability), 大学ランク (university rank; reference category A),
#   縁故 (connections), 親の所得 (parental income), 寄付金 (donations)
# ------------------------------------------------------------------------------
# Generate data (university rank as a factor, reference level A)
set.seed(123)
n <- 1000
ranks_char <- sample(LETTERS[1:6], n, replace = TRUE)
ranks_factor <- factor(ranks_char, levels = LETTERS[1:6], ordered = TRUE)  # generated as an ordered factor
data <- data.frame(
  裏口入学 = rbinom(n, 1, 0.2),
  学力 = rnorm(n, mean = 50, sd = 10),
  大学ランク = ranks_factor,
  縁故 = rbinom(n, 1, 0.1),
  親の所得 = rlnorm(n, meanlog = log(5000), sdlog = 0.3),
  寄付金 = rlnorm(n, meanlog = log(100000), sdlog = 1.0)
)
# Generate the amount paid
rank_numeric <- as.numeric(data$大学ランク)  # A=1, B=2, ..., F=6
data$支払額 <- 10 * pmax(
  300 + 100 * data$裏口入学 + 0.5 * (100 - data$学力) + 20 * rank_numeric +
    50 * data$縁故 + 0.05 * data$親の所得 + 0.001 * data$寄付金 +
    rnorm(n, mean = 0, sd = 50),
  0
)
# Logistic regression model (university rank treated as a factor, reference category A)
model_full <- glm(裏口入学 ~ 学力 + 大学ランク + 縁故 + 親の所得 + 寄付金,
                  data = data, family = binomial)
# Odds ratios and confidence intervals
confint_vals <- exp(confint(model_full))
odds_ratios_ci <- data.frame(
  Variable = rownames(confint_vals)[-1],
  CI_lower = confint_vals[-1, 1],
  CI_upper = confint_vals[-1, 2]
)
# Japanese variable labels for the plot
label_map <- c(
  "学力" = "学力",
  "大学ランク.L" = "大学ランク B",
  "大学ランク.Q" = "大学ランク C",
  "大学ランク.C" = "大学ランク D",
  "大学ランク^4" = "大学ランク E",
  "大学ランク^5" = "大学ランク F",
  "縁故" = "縁故",
  "親の所得" = "親の所得",
  "寄付金" = "寄付金"
)
odds_ratios_ci$日本語変数名 <- ifelse(odds_ratios_ci$Variable %in% names(label_map),
                                      label_map[odds_ratios_ci$Variable],
                                      odds_ratios_ci$Variable)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/831
832: 132人目の素数さん [sage] 2025/05/10(土) 11:02:56.88 ID:ynDPH7B8
# Save the current par() settings
current_par <- par(no.readonly = TRUE)
# Draw with the specified mar and bty
par(mar = c(5, 8, 5, 2), bty = 'l')
# Confidence-interval plot of the odds ratios using plot() (reference line at 1)
n_vars <- nrow(odds_ratios_ci)
y_positions <- n_vars:1
xlim_odds <- range(odds_ratios_ci$CI_lower, odds_ratios_ci$CI_upper)
plot(NA, xlim = xlim_odds, ylim = c(0.5, n_vars + 0.5),
     xlab = "オッズ比 (log scale)", ylab = "",
     main = "ロジスティック回帰分析:オッズ比の95%信頼区間",
     log = "x", yaxt = "n")
segments(x0 = odds_ratios_ci$CI_lower, x1 = odds_ratios_ci$CI_upper,
         y0 = y_positions, y1 = y_positions, col = "skyblue", lwd = 4)
abline(v = 1, lty = "dashed", col = "black")
# Add the Japanese y-axis labels
axis(side = 2, at = y_positions,
     labels = odds_ratios_ci$日本語変数名[order(y_positions, decreasing = TRUE)], las = 1)
# Restore the original par() settings after plotting
par(current_par)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/832
833: 132人目の素数さん [sage] 2025/05/11(日) 21:13:09.76 ID:2CgV4g4d
# Data setup
n_placebo <- 1000; eff_placebo <- 24
n_old <- 1000; eff_old <- 40
n_new <- 1000; eff_new <- 25
# Response rates
p_placebo <- eff_placebo / n_placebo
p_old <- eff_old / n_old
p_new <- eff_new / n_new
# Comparison: old drug vs placebo (significant)
m1 <- matrix(c(eff_old, n_old - eff_old, eff_placebo, n_placebo - eff_placebo), nrow = 2)
test1 <- prop.test(m1, correct = FALSE)
# Comparison: new drug vs placebo (not significant)
m2 <- matrix(c(eff_new, n_new - eff_new, eff_placebo, n_placebo - eff_placebo), nrow = 2)
test2 <- prop.test(m2, correct = FALSE)
# Comparison: old vs new drug (non-inferiority test)
# Non-inferiority margin
M <- -0.10
# Difference (new - old)
diff <- p_new - p_old
# Standard error (used for the 95% CI of the difference)
se <- sqrt(p_new*(1 - p_new)/n_new + p_old*(1 - p_old)/n_old)
z <- qnorm(0.025, lower.tail = FALSE)
lower_CI <- diff - z * se
# Non-inferiority decision
non_inferior <- lower_CI > M
# Display results
cat("=== Old drug vs placebo ===\n")
print(test1)
cat("\n=== New drug vs placebo ===\n")
print(test2)
cat("\n=== Non-inferiority test (old vs new) ===\n")
cat(sprintf("Difference (new - old) = %.3f\n", diff))
cat(sprintf("95%% CI = [%.3f, %.3f]\n", diff - z*se, diff + z*se))
cat(sprintf("Non-inferiority margin = %.3f\n", M))
cat(sprintf("Non-inferiority: %s\n", ifelse(non_inferior, "non-inferior", "not non-inferior")))
http://rio2016.5ch.net/test/read.cgi/math/1723152147/833
834: 132人目の素数さん [sage] 2025/05/13(火) 13:21:07.90 ID:L+Wotuil
I gave the AI the three laws of bottom-tier private med schools and it extended them to twelve.
The Twelve Laws of Do-Teihen Medical School
Law 1: It is not the bottom medical school but its enrollee that is despicable, who deserves to be called a bona fide moron beyond redemption.
Law 2: The graduates of Do-Teihen are so ashamed that none of them dare mention the alma mater they went through.
Law 3: The Do-Teihen graduates are so ashamed of having bought their way into the exclusively bottom-leveled medical school that they tend to call a genuine doctor a charlatan who elucidates their imbecility.
Law 4: While claiming that academic ranking or alma mater does not matter, a Do-Teihen graduate would never let their own child attend such a university.
Law 5: A Do-Teihen graduate never denies the existence of backdoor admissions, because deep down, they know they were one of them.
Law 6: While crying out against discrimination, Do-Teihen medical schools are the very institutions that blatantly discriminate against those without test scores, wealth, or connections.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/834
835: 132人目の素数さん [sage] 2025/05/13(火) 13:21:14.83 ID:L+Wotuil
Law 7: A Do-Teihen graduate may claim, "I got in on merit," but they will never define what that 'merit' actually was.
Law 8: The greatest enemy of a Do-Teihen graduate is not their classmates but the cold, numerical reality of standardized test scores.
Law 9: In Do-Teihen med schools, the virtuous are outcasts; only flatterers and the wealthy thrive.
Law 10: Those who enter Do-Teihen medical schools do so with lies, and graduate by continuing to lie.
Law 11: Graduates of Do-Teihen reserve their deepest hatred for intelligent critics, because those critics reflect everything they can never become.
Law 12: Do-Teihen is not a school to train doctors. It is a toll booth that issues medical licenses for a price.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/835
836: 132人目の素数さん [sage] 2025/05/15(木) 15:07:47.91 ID:vFdoSXtm
rm(list = ls())
library(PropCIs)
noninferior.pitfall <- function(r0, n0, r1, n1, r2, n2, r3, n3, nim_coef, alpha = 0.05, yates = FALSE) {
  delta <- (r0/n0 - r1/n1) * nim_coef
  if (min(r0, r1, r2, r3) < 5) {
    p1 <- fisher.test(matrix(c(r1, n1 - r1, r0, n0 - r0), 2, 2))$p.value
    p2 <- fisher.test(matrix(c(r2, n2 - r2, r0, n0 - r0), 2, 2))$p.value
    ci_upper <- diffscoreci(r2, n2, r3, n3, conf.level = 1 - 2*alpha)$conf.int[2]
  } else {
    p1 <- prop.test(c(r1, r0), c(n1, n0), correct = yates)$p.value
    p2 <- prop.test(c(r2, r0), c(n2, n0), correct = yates)$p.value
    ci_upper <- prop.test(c(r2, r1), c(n2, n1), conf.level = 1 - 2*alpha, correct = yates)$conf.int[2]
  }
  all(
    r1 < r0 && p1 < alpha,
    p2 > alpha,
    ci_upper < delta
  )
}
noninferior.pitfall(16, 201, 6, 202, 7, 203, 6, 204, 0.684)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/836
837: 132人目の素数さん [sage] 2025/05/16(金) 16:37:03.25 ID:s89ybxV8
Event counts, as proportions of patients, were
trial 1: old drug vs placebo, 5/201 vs 19/202
trial 2: new drug vs old drug, 9/203 vs 5/204.
(1) Compute the probability that the new drug is worse than placebo.
(2) Can the new drug be called significantly more effective than placebo?
Conditions needed for the calculation may be set as appropriate,
e.g., event occurrences are independent.
library(rjags)
library(coda)
worth_than_placebo <- function(r0, n0, r1, n1, r2, n2, r3, n3){
  model_string <- '
  model {
    # Trial (old drug vs placebo)
    r1 ~ dbin(p1, n1)
    p1 ~ dbeta(1, 1)
    r0 ~ dbin(p0, n0)
    p0 ~ dbeta(1, 1)
    # Trial (new drug vs old drug)
    r2 ~ dbin(p2, n2)
    p2 ~ dbeta(1, 1)
    r3 ~ dbin(p3, n3)
    p3 ~ dbeta(1, 1)
    # parameters
    p2_est <- p2
    p0_est <- p0
    p2_worse_than_p0 <- step(p2_est - p0_est)
  }
  '
  data <- list(r1 = r1, n1 = n1, r0 = r0, n0 = n0, r2 = r2, n2 = n2, r3 = r3, n3 = n3)
  jags_model <- jags.model(file = textConnection(model_string), data = data,
                           n.chains = 3, n.adapt = 3000, quiet = TRUE)
  update(jags_model, n.iter = 2000)
  jags_samples <- coda.samples(jags_model,
                               variable.names = c("p2_est", "p0_est", "p2_worse_than_p0"),
                               n.iter = 10000, thin = 1)
  summary(jags_samples)
  js <- as.data.frame(as.matrix(jags_samples))
  names(js)
  source("plotpost.R")
  layout(matrix(c(1, 2, 3, 3), 2, 2, byrow = TRUE))
  plotpost(js$p2_est, col = 'lightcoral', xlab = "New drug", cex.lab = 1.5, main = "")
  plotpost(js$p0_est, col = 'lightgray', xlab = "Placebo", cex.lab = 1.5, main = "")
  plotpost(js$p0_est - js$p2_est, compVal = 0, col = c('lightcoral', 'lightgray'),
           xlab = "Placebo - New drug", main = "", cex.lab = 1.5)
  HDInterval::hdi(js$p0_est - js$p2_est) |> print()
  mean(js$p2_worse_than_p0)
}
result <- worth_than_placebo(r0 = 19, n0 = 202, r1 = 5, n1 = 201, r2 = 9, n2 = 203, r3 = 5, n3 = 204)
print(paste("Probability the new drug is worse than placebo:", result))
http://rio2016.5ch.net/test/read.cgi/math/1723152147/837
838: 132人目の素数さん [sage] 2025/05/17(土) 02:21:13.42 ID:zAzyVzie
>>837
# --- Required packages ---
library(rjags)
library(coda)
library(HDInterval)
# --- Data definition ---
data_list <- list(
  r0 = 19, n0 = 202,  # placebo
  r1 = 5,  n1 = 201,  # old drug (trial 1)
  r2 = 9,  n2 = 203,  # new drug
  r3 = 5,  n3 = 204   # old drug (trial 2)
)
# --- Hierarchical model definition ---
model_hier <- "
model {
  r0 ~ dbin(p0, n0)
  r1 ~ dbin(p1, n1)
  r2 ~ dbin(p2, n2)
  r3 ~ dbin(p3, n3)
  p0 ~ dbeta(1, 1)
  p2 ~ dbeta(1, 1)
  mu_old ~ dbeta(1, 1)
  tau ~ dgamma(0.001, 0.001)  # weakly informative prior
  p1 ~ dbeta(mu_old * tau, (1 - mu_old) * tau)
  p3 ~ dbeta(mu_old * tau, (1 - mu_old) * tau)
  p2_worse_than_p0 <- step(p2 - p0)
  rd_p0_p1 <- p0 - p1
  rd_p1_p2 <- p1 - p2
  rd_p0_p3 <- p0 - p3
}
"
jags_model <- jags.model(textConnection(model_hier), data = data_list, n.chains = 2, quiet = TRUE)
update(jags_model, 3000, progress.bar = "none")
jags_samples <- coda.samples(jags_model,
                             c("p0", "p1", "p2", "p3", "p2_worse_than_p0",
                               "rd_p0_p1", "rd_p1_p2", "rd_p0_p3"),
                             n.iter = 10000, progress.bar = "none")
gelman.plot(jags_samples)
plot(jags_samples)
js <- as.data.frame(as.matrix(jags_samples))
mean(js$p2_worse_than_p0)
hdi(js$rd_p0_p1)    # old drug (trial 1) vs placebo
hdi(js$rd_p1_p2)    # old drug (trial 2) vs new drug
hdi(js$rd_p0_p3)    # old drug (trial 2) vs placebo, counterfactual
hdi(js$p0 - js$p2)  # new drug vs placebo, counterfactual
source("plotpost.R")
layout(matrix(c(1, 2, 3, 3), 2, 2, byrow = TRUE))
plotpost(js$p2, col = 'lightcoral', xlab = "New drug", cex.lab = 1.5, main = "")
plotpost(js$p0, col = 'lightgreen', xlab = "Placebo", cex.lab = 1.5, main = "")
plotpost(js$p0 - js$p2, compVal = 0, col = c('lightcoral', 'lightgreen'),
         xlab = "Placebo - New drug", cex.main = 2, main = "Hierarchical binomial model",
         cex.lab = 1.5, breaks = "scott")
http://rio2016.5ch.net/test/read.cgi/math/1723152147/838
839: 132人目の素数さん [sage] 2025/05/17(土) 07:49:04.94 ID:Vpav5/5q
library(meta)
library(gemtc)
# Build the data frame
data <- data.frame(
  study = factor(c(1, 1, 2, 2)),
  treatment = factor(c("Placebo", "Old", "Old", "New")),
  events = c(19, 5, 5, 9),
  total = c(202, 201, 204, 203)
)
# List of treatment comparisons
comparisons <- c("New - Placebo")
# Create the network object
network <- mtc.network(data = data)
# Random-effects network meta-analysis
model <- mtc.model(network, type = "consistency", likelihood = "binom",
                   link = "logit", linearModel = "random")
# Run the model
results <- mtc.run(model)
# Summary of the results
summary(results)
# Effect size (odds ratio) and 95% interval for new drug vs placebo
relative.effect(results, "New", "Placebo")
# Estimate the probability that the new drug is worse than placebo (P(OR > 1))
prob.superiority <- 1 - pnorm(0, mean = results$sol[grep("d.New", names(results$sol))],
                              sd = results$se[grep("d.New", names(results$se))])
cat("Probability the new drug is worse than placebo (indirect estimate):", prob.superiority, "\n")
# Assess whether the new drug is significantly effective vs placebo:
# judge by whether the 95% interval of the odds ratio contains 1
or.new.vs.placebo <- exp(results$sol[grep("d.New", names(results$sol))])
ci.lower <- exp(results$sol[grep("d.New", names(results$sol))] -
                1.96 * results$se[grep("d.New", names(results$se))])
ci.upper <- exp(results$sol[grep("d.New", names(results$sol))] +
                1.96 * results$se[grep("d.New", names(results$se))])
cat("New drug vs placebo odds ratio (indirect estimate):", or.new.vs.placebo, "\n")
cat("95% interval:", ci.lower, "-", ci.upper, "\n")
if (ci.upper < 1) {
  cat("The new drug is likely significantly effective relative to placebo.\n")
} else if (ci.lower > 1) {
  cat("The new drug may be significantly worse than placebo.\n")
} else {
  cat("There may be no significant difference between the new drug and placebo.\n")
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/839
840: 132人目の素数さん [sage] 2025/05/20(火) 23:31:13.05 ID:gwaBTE4C
library(R2jags)
# Data
data <- list(
  nA1 = 100, rA1 = 80,  # Study 1: treatment A
  nB1 = 100, rB1 = 40,  # Study 1: treatment B
  nA2 = 100, rA2 = 10,  # Study 2: treatment A
  nC2 = 100, rC2 = 5    # Study 2: treatment C
)
# JAGS model (via textConnection)
model_code <- "
model {
  # Likelihood
  rA1 ~ dbin(pA1, nA1)
  rB1 ~ dbin(pB1, nB1)
  rA2 ~ dbin(pA2, nA2)
  rC2 ~ dbin(pC2, nC2)
  # Hierarchical model for treatment A
  mu_A ~ dbeta(1, 1)
  tau_A ~ dgamma(0.001, 0.001)
  pA1 ~ dbeta(mu_A * tau_A, (1 - mu_A) * tau_A)
  pA2 ~ dbeta(mu_A * tau_A, (1 - mu_A) * tau_A)
  sigma_A <- 1 / sqrt(tau_A)  # convert to SD
  # Treatments B and C also hierarchical (separate mean risks)
  mu_B ~ dbeta(1, 1)
  mu_C ~ dbeta(1, 1)
  pB1 ~ dbeta(mu_B * 100, (1 - mu_B) * 100)  # assumes high precision
  pC2 ~ dbeta(mu_C * 100, (1 - mu_C) * 100)
  # Risk differences
  RD_A1_B1 <- pA1 - pB1
  RD_A2_C2 <- pA2 - pC2
  RD_B1_C2 <- pB1 - pC2  # direct comparison of B vs C
}
"
# Run JAGS
jags_model <- jags.model(textConnection(model_code), data = data, n.chains = 3, quiet = TRUE)
update(jags_model, 3000)  # , progress.bar = "none")
jags_samples <- coda.samples(jags_model,
                             c("mu_A", "sigma_A", "RD_A1_B1", "RD_A2_C2", "RD_B1_C2",
                               "pA1", "pA2", "pB1", "pC2"),
                             n.iter = 10000)  # , progress.bar = "none")
gelman.plot(jags_samples)
plot(jags_samples)
summary(jags_samples)
jags_samples |> as.matrix() |> as.data.frame() -> js
names(js)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/840
841: 132人目の素数さん [sage] 2025/05/23(金) 10:58:19.14 ID:or+7Cxzr
"
Construct a Monte Carlo study that investigates how the probability of coverage depends on
the sample size and true proportion value. In the study, let n be 10, 25, 50, and 100 and
let p be .05, .25, and .50. Write an R function that has three inputs, n, p, and the number
of Monte Carlo simulations m, and will output the estimate of the exact coverage probability.
Implement your function using each combination of n and p and m = 1000 simulations.
Describe how the actual probability of coverage of the traditional interval depends on the
sample size and true proportion value.
"
f = \(n, p, m = 1000){
  y = rbinom(m, n, p)
  phat = y/n
  se = sqrt(phat*(1-phat)/n)
  lo = phat - qnorm(0.975)*se
  up = phat + qnorm(0.975)*se
  mean(lo < p & p < up)
}
f = Vectorize(f)
n_values = c(10, 25, 50, 100)
p_values = c(0.05, 0.25, 0.5)
set.seed(123)
outer(n_values, p_values, f)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/841
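The Monte Carlo estimate above can also be cross-checked exactly, since y takes only the values 0..n. A sketch (the function name exact_coverage is made up here) that sums binomial probabilities over the outcomes whose Wald interval covers p:
# Sketch: exact coverage of the Wald interval, no simulation needed.
exact_coverage <- function(n, p){
  y <- 0:n
  phat <- y / n
  se <- sqrt(phat * (1 - phat) / n)
  covers <- (phat - qnorm(0.975) * se < p) & (p < phat + qnorm(0.975) * se)
  sum(dbinom(y, n, p)[covers])  # total probability of the covering outcomes
}
exact_coverage(10, 0.05)  # well below the nominal 0.95 for small n and extreme p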
842: 132人目の素数さん [sage] 2025/05/24(土) 02:35:14.53 ID:VetM3rz7
LearnBayes::beta.select computed via optim:
beta.optim <- function(x1, p1, x2, p2, verbose = TRUE) {
  # -------------------------
  # Initial values via moment approximation
  # -------------------------
  mu0 <- (x1 + x2) / 2           # provisional mean
  sigma2 <- ((x2 - x1) / 4)^2    # provisional variance (from the middle 50% width)
  v <- mu0 * (1 - mu0) / sigma2 - 1
  a0 <- mu0 * v
  b0 <- (1 - mu0) * v
  start <- c(a0, b0)
  # -------------------------
  # Objective function to minimise
  # -------------------------
  objective <- function(params) {
    a <- params[1]
    b <- params[2]
    # Squared error against the target cumulative probabilities via pbeta
    err1 <- pbeta(x1, a, b) - p1
    err2 <- pbeta(x2, a, b) - p2
    return(err1^2 + err2^2)
  }
  # -------------------------
  # Optimisation (with bounds)
  # -------------------------
  result <- optim(
    par = start,
    fn = objective,
    method = "L-BFGS-B",
    lower = c(1e-4, 1e-4)  # the Beta distribution requires a, b > 0
  )
  # -------------------------
  # Extract the results
  # -------------------------
  a_hat <- result$par[1]
  b_hat <- result$par[2]
  if (verbose) {
    cat("Estimated parameters: a =", round(a_hat, 4), ", b =", round(b_hat, 4), "\n")
    cat("pbeta(x1) =", round(pbeta(x1, a_hat, b_hat), 4), "(target:", p1, ")\n")
    cat("pbeta(x2) =", round(pbeta(x2, a_hat, b_hat), 4), "(target:", p2, ")\n")
  }
  # -------------------------
  # Return the results
  # -------------------------
  return(list(a = a_hat, b = b_hat, ss_value = result$value))
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/842
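A usage sketch (the quantile targets are borrowed from the dice posts earlier in the thread; output not verified here), matching the call convention of LearnBayes::beta.select:
# Find Beta(a, b) with 2.5% quantile 1/7 and 97.5% quantile 1/5.
beta.optim(1/7, 0.025, 1/5, 0.975)
# For comparison, LearnBayes::beta.select takes the same targets as lists:
# LearnBayes::beta.select(list(x = 1/7, p = 0.025), list(x = 1/5, p = 0.975))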
843: 132人目の素数さん [sage] 2025/05/24(土) 08:17:52.86 ID:VetM3rz7
library(rjags)
# Fit a Bayesian logistic regression model using JAGS and return predictions and posterior summaries
fit_bayesian_logistic_jags <- function(data, formula, newdata, n.chains = 3,
                                       n.iter = 5000, n.burnin = 1000) {
  # Extract response variable name from the formula
  response_var <- all.vars(formula)[1]
  y <- data[[response_var]]
  # Convert factor response to binary numeric (0/1)
  if (is.factor(y)) y <- as.numeric(y) - 1
  y <- as.numeric(y)
  # Construct design matrices for training and new data
  X <- model.matrix(formula, data)
  new_X <- model.matrix(delete.response(terms(formula)), newdata)
  # Prepare data list for JAGS
  jags_data <- list(
    y = y, X = X, n = nrow(X), p = ncol(X), new_X = new_X,
    scale_beta = rep(2.5, ncol(X))  # Prior scale for each coefficient
  )
  # Define the JAGS model
  model_string <- "
  model {
    for (j in 1:p) {
      beta[j] ~ dt(0, 1 / pow(scale_beta[j], 2), 1)
    }
    for (i in 1:n) {
      logit_p[i] <- inprod(X[i,], beta[])
      y[i] ~ dbern(1 / (1 + exp(-logit_p[i])))
    }
    new_logit <- inprod(new_X[1,], beta[])
    new_p <- 1 / (1 + exp(-new_logit))
  }
  "
  # Initialize and run the JAGS model
  model <- jags.model(textConnection(model_string), data = jags_data,
                      n.chains = n.chains, quiet = TRUE)
  update(model, n.burnin)
  # Draw posterior samples
  samples <- coda.samples(model, c("beta", "new_p"), n.iter - n.burnin)
  mat <- as.matrix(samples)
  # Return results
  list(
    model = samples,
    predicted_prob = mean(mat[, "new_p"]),
    summary = summary(samples)
  )
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/843
844: 132人目の素数さん [sage] 2025/05/24(土) 08:18:19.92 ID:VetM3rz7
# Example data
data <- data.frame(
  donation = c(0, 1000, 2000, 0, 3000, 0, 4000, 0, 5000, 0),
  score = c(90, 40, 35, 88, 30, 85, 25, 92, 20, 89),
  parent = c(0, 1, 1, 0, 1, 0, 1, 0, 1, 0),
  admission = as.factor(c(0, 1, 1, 0, 1, 0, 1, 0, 1, 0))
)
# New observation to predict
newdata <- data.frame(
  donation = 2500,
  score = 40,
  parent = 1
)
# Fit model and obtain results
set.seed(123)
result <- fit_bayesian_logistic_jags(
  data = data,
  formula = admission ~ donation + score + parent,
  newdata = newdata
)
# Extract variable names including intercept
var_names <- colnames(model.matrix(admission ~ donation + score + parent, data))
# Extract beta coefficient summaries
beta_stats <- result$summary$statistics[grep("^beta\\[", rownames(result$summary$statistics)),
                                        c("Mean", "SD")]
beta_quants <- result$summary$quantiles[grep("^beta\\[", rownames(result$summary$quantiles)),
                                        c("2.5%", "97.5%")]
# Rename row names using variable names
rownames(beta_stats) <- var_names
rownames(beta_quants) <- var_names
# Display results
print(beta_stats)
print(beta_quants)
cat("Predicted probability:", round(result$predicted_prob, 3), "\n")
http://rio2016.5ch.net/test/read.cgi/math/1723152147/844
846: 132人目の素数さん [sage] 2025/05/24(土) 21:16:24.84 ID:VetM3rz7
# dbeta(L,a,b) == dbeta(U,a,b)
# Solve[L^(a-1) (1-L)^(b-1) == U^(a-1) (1-U)^(b-1), b]
L = 1/7
U = 1/5
credMass = 0.95
f = function(a) 1 + ((a - 1) * log(U / L)) / log((1 - L) / (1 - U))
g = function(a) pbeta(U, a, f(a)) - pbeta(L, a, f(a)) - credMass
(re = uniroot(g, c(1, 1e5)))
curve(g(x), 1, 150, bty = "l"); abline(h = 0, lty = 3)
c(re$root, f(re$root))
http://rio2016.5ch.net/test/read.cgi/math/1723152147/846
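The closed form for b used in f comes from equating the Beta densities at the two endpoints; the kernel suffices since the normalising constant cancels. Taking logs:
\[
L^{a-1}(1-L)^{b-1} = U^{a-1}(1-U)^{b-1}
\implies (a-1)\ln\frac{U}{L} = (b-1)\ln\frac{1-L}{1-U}
\implies b = 1 + (a-1)\,\frac{\ln(U/L)}{\ln\bigl((1-L)/(1-U)\bigr)},
\]
after which uniroot adjusts a so that [L, U] carries exactly credMass of the probability. Equal endpoint densities make [L, U] the highest-density interval of the resulting Beta(a, b).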
847: 132人目の素数さん [sage] 2025/05/25(日) 04:22:37.55 ID:P4nhnL8B
# dbeta(L,a,b) == dbeta(U,a,b)
# Solve[L^(a-1) (1-L)^(b-1) == U^(a-1) (1-U)^(b-1), b]
L = 1/7
U = 1/5
credMass = 0.95
f = function(a) 1 + ((a - 1) * log(U / L)) / log((1 - L) / (1 - U))
g = function(a) pbeta(U, a, f(a)) - pbeta(L, a, f(a)) - credMass
(re = uniroot(g, c(1, 1e5)))
a = re$root
b = f(a)
c(a, b)
curve(g(x), 1, 1.5*a, bty = "l"); abline(h = 0, lty = 3)
a/(a + b)                # mean
(a - 1)/(a - 1 + b - 1)  # mode
library(LearnBayes)
beta.select(list(x = 1/7, p = 0.025), list(x = 1/5, p = 0.975))
http://rio2016.5ch.net/test/read.cgi/math/1723152147/847
848: 132人目の素数さん [sage] 2025/05/25(日) 05:57:52.88 ID:P4nhnL8B
>>847's R code ported to Mathematica, with comments, via ChatGPT:
(* betaParameter: compute the parameters α, β of a Beta distribution that places
   probability mass credMass (e.g. 95%) on the given interval [L, U]. *)
betaParameter[L_: 1/7, U_: 1/5, credMass_: 0.95] := Module[
  {α, β},
  (* f[α] computes β from α so that
     PDF[BetaDistribution[α, β], L] == PDF[BetaDistribution[α, β], U]. *)
  f[α_] := 1 + ((α - 1) * Log[U / L]) / Log[(1 - L) / (1 - U)];
  (* g[α] returns the probability (difference of CDFs) that
     Beta[α, f[α]] assigns to the interval [L, U]. *)
  g[α_] := CDF[BetaDistribution[α, f[α]], U] - CDF[BetaDistribution[α, f[α]], L];
  (* Numerically solve g[α] == credMass for α. *)
  α = α /. FindRoot[g[α] == credMass, {α, 1, 1*^5}];
  (* Compute the corresponding β. *)
  β = f[α];
  (* Return the result. *)
  {α, β}
]
(* Run the function to obtain α, β. *)
{α, β} = betaParameter[]
(* Evaluate g[α] to confirm that [L, U] carries credMass of the mass. *)
g[α]
http://rio2016.5ch.net/test/read.cgi/math/1723152147/848
849: 132人目の素数さん [sage] 2025/05/25(日) 06:42:27.34 ID:P4nhnL8B
>>847
I gave this prototype to an AI and had it add plotting functionality and comments.
beta.parameter <- function(lower, upper, credMass = 0.95, verbose = FALSE) {
  # Helper function to convert decimal numbers to fraction strings using MASS::fractions
  fractionStr <- function(x) {
    as.character(MASS::fractions(x))
  }
  # Compute the beta parameter from alpha, derived from the condition
  # on the shape of the distribution between lower and upper
  f <- function(alpha) {
    1 + ((alpha - 1) * log(upper / lower)) / log((1 - lower) / (1 - upper))
  }
  # Root-finding function: difference between the desired credible mass and the
  # Beta CDF probability between lower and upper with parameters (alpha, f(alpha))
  g <- function(alpha) {
    pbeta(upper, alpha, f(alpha)) - pbeta(lower, alpha, f(alpha)) - credMass
  }
  # Find the root of g(alpha) = 0 over the interval [1, 1e5]
  # to find the alpha value that satisfies the credible mass condition
  re <- uniroot(g, c(1, 1e5))
  alpha <- re$root
  beta <- f(alpha)
  # Calculate the mean of the Beta distribution
  mean <- alpha / (alpha + beta)
  # Calculate the mode if defined (alpha > 1 and beta > 1),
  # otherwise set mode to NA as it is undefined
  mode <- if (alpha > 1 && beta > 1) {
    (alpha - 1) / (alpha + beta - 2)
  } else {
    NA
  }
http://rio2016.5ch.net/test/read.cgi/math/1723152147/849
850: 132人目の素数さん [sage] 2025/05/25(日) 06:42:37.76 ID:P4nhnL8B
  # If verbose flag is TRUE, plot the Beta distribution and annotate results
  if (verbose) {
    # Generate x values from 0 to 1 for plotting the density
    x <- seq(0, 1, length.out = 1000)
    # Compute Beta density values at x
    y <- dbeta(x, alpha, beta)
    # Color bars within the credible interval [lower, upper] as "lightcoral",
    # others as light gray ("gray70")
    col <- ifelse(x >= lower & x <= upper, "lightcoral", "gray70")
    # Plot histogram-like vertical lines representing the Beta density
    plot(x, y, type = "h", col = col, lwd = 2,
         main = sprintf("Beta(%.2f, %.2f) Distribution\n[%s, %s] with %.0f%% Probability Mass",
                        alpha, beta, fractionStr(lower), fractionStr(upper), credMass * 100),
         xlab = "x", ylab = "Density", bty = "n")
    # Add vertical dashed line for the mean, colored skyblue
    abline(v = mean, col = "skyblue", lwd = 1, lty = 2)
    # Add vertical dashed line for the mode if defined, colored dark green
    if (!is.na(mode)) {
      abline(v = mode, col = "darkgreen", lwd = 1, lty = 2)
    }
    # Prepare legend labels for mean, mode (if it exists), and credible interval
    labels <- c(
      paste0("Mean = ", round(mean, 3)),
      if (!is.na(mode)) paste0("Mode = ", round(mode, 3)) else NULL,
      paste0("95% Credible Interval [", fractionStr(lower), ", ", fractionStr(upper), "]")
    )
    # Corresponding colors for legend items
    colors <- c(
      "skyblue",
      if (!is.na(mode)) "darkgreen" else NULL,
      "lightcoral"
    )
    # Line types for legend items (dashed lines for mean and mode, none for credible interval)
    ltys <- c(2, if (!is.na(mode)) 2 else NULL, NA)
    # Plot characters for legend (none for lines, solid square for interval)
    pchs <- c(NA, if (!is.na(mode)) NA else NULL, 15)
    # Add legend at the top of the plot with no box, scaled text size
    legend("top", legend = labels, col = colors, bty = "n", cex = 0.9, lty = ltys, pch = pchs)
  }
  # Return a named vector of calculated parameters and statistics
  c(alpha = alpha, beta = beta, mean = mean, mode = mode)
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/850
851: 132人目の素数さん [sage] 2025/05/25(日) 06:51:16.85 ID:P4nhnL8B
The main reason the functionality could be extended so smoothly this time is that the prototype was very clear and sound.
✅ Why the prototype's quality supported extensibility
1. Simple structure
The central equation (pbeta(U, a, b) - pbeta(L, a, b) = credMass) was explicit and the logic consistent.
Defining b as a function of a via f(a) kept the search space one-dimensional, which was also efficient.
2. Functions separated and reusable
Since f(a) and g(a) were defined as functions, adding visualization and variations was easy.
3. Clear output
Returning c(a, b) was a plain structure, and extra information (mean, mode, etc.) could be added naturally.
4. Good numerical stability
Root finding with uniroot() is numerically reliable and practical.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/851
852: 132人目の素数さん [sage] 2025/05/25(日) 10:38:10.59 ID:P4nhnL8B
HDI_discrete <- function(prob_data, credMass) {
  x = prob_data[, 1]
  p = prob_data[, 2]
  n = length(x)
  sp = sort(p, index.return = TRUE)
  i = sp$ix[seq(n, 1, -1)]   # indices of probabilities in decreasing order
  ps = p[i]
  xs = x[i]
  cp = cumsum(ps)
  ii = 1:n
  j = ii[cp >= credMass]
  j = j[1]                   # smallest set whose cumulative mass reaches credMass
  achieved_credMass = cp[j]
  hdi_set = sort(xs[1:j])
  v = list(credMass = achieved_credMass, set = hdi_set)
  return(v)
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/852
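A usage sketch (the example distribution is made up here): HDI_discrete expects a two-column matrix of support points and probabilities, and returns the smallest set of points whose total mass reaches credMass.
# Hypothetical example: 95% HDI of a discrete distribution over 0..20.
x <- 0:20
p <- dbinom(x, 20, 0.3)
p <- p / sum(p)                  # normalise to a proper pmf
HDI_discrete(cbind(x, p), 0.95)  # $credMass (achieved) and $set (HDI points)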
853: 132人目の素数さん [sage] 2025/05/29(木) 17:36:18.23 ID:WPcwJ6cn
"
A die is thrown 50 times and the 1 comes up 4 times.
Find the probability that the hypothesis 'the probability of rolling a 1 is 1/6' is correct.
Models needed for the calculation may be set as appropriate,
e.g., the prior probability that the hypothesis is correct is uniformly distributed.
a/(a+b) = p
a = p(a+b)
(1-p)a = pb
b = a*(1-p)/p
a = b*p/(1-p)
a/b = p/(1-p)
"
rm(list = ls())
library(LearnBayes)  # for pbetat
p = 1/6
n = 50
y = 4
pbinom(y, n, p)
dbinom(y, n, p)
fn <- function(prob){
  sub <- function(alpha) pbetat(p, prob, c(1, (1-p)/p)*alpha, c(y, n - y))$post
  optimize(sub, c(1, 100))$objective
}
fn = Vectorize(fn)
fn(0.5)
prior.probs = runif(1e6)
post.probs = fn(prior.probs)
summary(post.probs)
HDInterval::hdi(post.probs)
source("plotpost.R")
plotpost(post.probs, col = "lightcoral")
lines(density(post.probs))
http://rio2016.5ch.net/test/read.cgi/math/1723152147/853
854: 132人目の素数さん [sage] 2025/05/30(金) 06:24:20.07 ID:zoAuXcvc
par(bty = "l")
y = c(43, 24, 100, 35, 85)
yn = max(y)
n = length(y)
B = 200
like = numeric(B)
for (i in yn:B) {
  like[i] = 1 / (i^n)
}
pmf = like / sum(like)
mean = sum((1:B) * pmf)
plot(pmf, main = "Posterior PMF of N", xlab = "N", ylab = "Probability",
     type = "h", col = "blue")
plot(cumsum(pmf), main = "Posterior CDF of N", xlab = "N",
     ylab = "Cumulative Probability", type = "s", col = "blue")
abline(h = 0.95, lty = 3)
c(lower = yn, mean = mean, upper = which(cumsum(pmf) > 0.95)[1])
sd = sqrt(sum(((1:B) - mean)^2 * pmf))
prob_N_gt_150 = sum(pmf[151:B])
cat("Posterior Mean:", mean, "\n")
cat("Posterior Standard Deviation:", sd, "\n")
cat("P(N > 150):", prob_N_gt_150, "\n")
http://rio2016.5ch.net/test/read.cgi/math/1723152147/854
855: 132人目の素数さん [sage] 2025/05/30(金) 19:44:31.38 ID:zoAuXcvc
(* Definition of the pbetat function *)
pbetat[p0_, prob_, ab_, data_] :=
 Module[{a, b, s, f, lbf, bf, post},
  a = ab[[1]]; b = ab[[2]];
  s = data[[1]]; f = data[[2]];
  lbf = s*Log[p0] + f*Log[1 - p0] + Log@Beta[a, b] - Log@Beta[a + s, b + f];
  bf = Exp[lbf];
  post = prob*bf/(prob*bf + 1 - prob);
  <|"bf" -> bf, "post" -> post|>
  ]
(* Definition of f *)
f[alpha_] := pbetat[1/6, 0.5, {alpha, 5*alpha - 4}, {4, 50 - 4}]["post"]
(* Minimization *)
result = NMinimize[{f[alpha], 0 <= alpha <= 20}, alpha]
http://rio2016.5ch.net/test/read.cgi/math/1723152147/855
856: 132人目の素数さん [sage] 2025/05/30(金) 20:13:47.71 ID:zoAuXcvc
p_post_null <- function(p0, prior, alpha, beta, success, failure){
  # Calculate the total number of trials from successes and failures.
  total = success + failure
  # Calculate the likelihood of the data under the null hypothesis (H0).
  # This assumes a binomial distribution where the success probability is p0.
  m0 = dbinom(success, total, p0)
  # Calculate the marginal likelihood of the data under the alternative hypothesis (H1).
  # Under H1, the success probability is assumed to follow a Beta distribution
  # with parameters alpha and beta. This function (from the 'extraDistr' package)
  # computes the marginal likelihood by integrating over all possible success probabilities.
  m1 = extraDistr::dbbinom(success, total, alpha, beta)
  # Calculate the Bayes Factor (BF01).
  # This is the ratio of the likelihood under the null hypothesis (m0)
  # to the marginal likelihood under the alternative hypothesis (m1).
  BF01 = m0 / m1
  # Calculate the posterior probability of the null hypothesis.
  # This updates the prior belief (prior) based on the evidence from the data (BF01).
  p_post = prior * BF01 / (prior * BF01 + 1 - prior)
  # Return both the Bayes Factor and the posterior probability as a named vector.
  c(BF01 = BF01, p_post = p_post)
}
# Optimize the 'alpha' parameter over the posterior probability of the null hypothesis.
# Note that optimize() minimizes by default, so this finds the 'alpha' (within 0 to 1e6)
# that makes the null hypothesis least plausible, given the data and the relationship
# beta = 5*alpha - 4 (which pins the mode of the Beta prior at 1/6).
# p_post_null(...)[2] specifically extracts the 'p_post' value from the function's output.
optimize(function(alpha) p_post_null(p0 = 1/6, prior = 1/2,
                                     alpha, 5 * alpha - 4,  # beta is a function of alpha
                                     success = 4, failure = 50 - 4)[2],
         c(0, 1e6))  # search range for alpha
# Calculate the Bayes Factor and posterior probability using specific alpha and beta values.
# These specific values (alpha = 50/9, beta = 5*50/9 - 4) are likely the result of the
# optimization step above, or pre-determined values that are of interest.
p_post_null(1/6, 1/2, 50/9, 5 * 50/9 - 4, 4, 50 - 4)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/856
857: 132人目の素数さん [sage] 2025/05/31(土) 05:25:02.62 ID:jzcOJBMt
#' @title Bayesian posterior probability calculator
#' @description Computes the posterior probabilities of the null and alternative hypotheses
#' @param s     observed number of successes (times the face 1 came up)
#' @param n     total number of trials
#' @param p0    probability under the null hypothesis (e.g. 1/6)
#' @param prior prior probability of the null hypothesis (0 to 1)
#' @param alpha Beta-distribution alpha parameter of the alternative hypothesis
#' @param beta  Beta-distribution beta parameter of the alternative hypothesis
#' @return list(bf01 = Bayes factor, post = posterior probability, method = method used)
calculate_posterior <- function(s, n, p0, prior, alpha, beta) {
  # Validate inputs
  stopifnot(
    s >= 0, n > 0,
    p0 > 0 && p0 < 1,
    prior >= 0 && prior <= 1,
    alpha > 0, beta > 0
  )
  # Likelihood under the null hypothesis
  m0 <- dbinom(s, n, p0)
  # Marginal likelihood under the alternative
  # (switches automatically depending on whether extraDistr is available)
  if (requireNamespace("extraDistr", quietly = TRUE)) {
    m1 <- extraDistr::dbbinom(s, n, alpha, beta)
    method <- "extraDistr::dbbinom()"
  } else {
    integrand <- function(p) dbinom(s, n, p) * dbeta(p, alpha, beta)
    m1 <- integrate(integrand, 0, 1)$value
    method <- "numerical integration"
  }
  # Bayes factor and posterior probability (using the formulas given above)
  bf01 <- m0 / m1
  post <- m0 * prior / (m0 * prior + (1 - prior) * m1)
  # Return the results as a list
  list(
    bf01 = bf01,
    post = post,
    method = method,
    inputs = list(s = s, n = n, p0 = p0, prior = prior, alpha = alpha, beta = beta),
    likelihoods = list(m0 = m0, m1 = m1)
  )
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/857
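A quick usage sketch with the same numbers as >>856 (4 ones in 50 throws, alpha = 50/9, beta = 5*alpha - 4); the call itself is my addition:

# Posterior probability of H0: P(roll a 1) = 1/6, given 4 ones in 50 throws
res <- calculate_posterior(s = 4, n = 50, p0 = 1/6, prior = 1/2,
                           alpha = 50/9, beta = 5 * 50/9 - 4)
res$bf01   # Bayes factor BF01
res$post   # posterior probability of the null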
858: 132人目の素数さん [sage] 2025/06/01(日) 07:35:56.27 ID:/PZFDI/g
# Normalize the unnormalized posterior and take its mean directly...
auc = integrate(unnorm_posterior, 0, Inf)$value
integrate(\(x) x*unnorm_posterior(x)/auc, 0, Inf)
# ...or do the same computation on the log scale for numerical stability
log_auc <- log(integrate(\(x) exp(log(unnorm_posterior(x))), 0, Inf)$value)
integrate(\(x) x * exp(log(unnorm_posterior(x)) - log_auc), 0, Inf)$value
http://rio2016.5ch.net/test/read.cgi/math/1723152147/858
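unnorm_posterior is not defined in the post; a self-contained sketch with a made-up example (a Gamma(3, 2) kernel, so the true posterior mean is 3/2) to show the normalization step:

# Hypothetical unnormalized posterior: a Gamma(3, 2) kernel without its constant
unnorm_posterior <- function(x) x^2 * exp(-2 * x)
auc <- integrate(unnorm_posterior, 0, Inf)$value             # normalizing constant
integrate(\(x) x * unnorm_posterior(x) / auc, 0, Inf)$value  # posterior mean, ~1.5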
859: 132人目の素数さん [sage] 2025/06/01(日) 09:23:27.85 ID:9o1m2vAK
Calculate the marginal likelihood of the data under the alternative hypothesis (H1).
wwwwwwwwwwwwwwwwwwwwwwwwwwwwww
What an idiot~~
wwwwwwwwwwwwwwwwwwwwwwwwwwwwww
http://rio2016.5ch.net/test/read.cgi/math/1723152147/859
860: 132人目の素数さん [sage] 2025/06/01(日) 10:41:31.84 ID:/PZFDI/g
Yes, Gelman and colleagues' recommendation has indeed evolved; in recent years the half-t distribution has become the more common recommendation. Here is the background to this shift and the concrete advice, split into theoretical grounds and practice.
1. How Gelman's recommendation changed
(1) Early period (around 2006): half-Cauchy (degrees of freedom ν = 1)
Reasons: suitable as a weakly informative prior for variance parameters; heavy tails avoid over-penalizing extreme values.
Problems: with ν = 1 the result is sensitive to the choice of scale, and estimates can be unstable on small data sets.
(2) More recently (from about 2013 on): a shift to half-t with ν > 1
Reasons: ν = 3 to 7 gives more stable estimation, and the influence of the scale parameter is softened.
Representative paper: Gelman, Simpson & Betancourt (2017), "The Prior Can Often Only Be Understood in the Context of the Likelihood". (A brms sketch of such a prior follows below.)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/860
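For concreteness, a minimal sketch of setting a half-t prior in brms (my own illustration; the model and the data frame d with y, x, g are invented, and brms restricts a Student-t prior on the "sd" class to the positive half-line automatically):

library(brms)
# Half-t(3, 0, 2.5) prior on a group-level standard deviation
priors <- set_prior("student_t(3, 0, 2.5)", class = "sd")
# Hypothetical varying-intercept model
fit <- brm(y ~ x + (1 | g), data = d, family = gaussian(), prior = priors)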
861: 132人目の素数さん [sage] 2025/06/02(月) 11:14:31.22 ID:GMuHFUYr
x = c(-0.86, -0.3, -0.05, 0.73)
n = c(5, 5, 5, 5)
y = c(0, 1, 3, 5)
(data = cbind(x, n, y))
(response = cbind(y, n - y))
results = glm(response ~ x, family = binomial)
#summary(results)
-results$coef[1]/results$coef[2]
library(MASS)  # for mvrnorm
# Estimated coefficients and covariance matrix
beta_hat = coef(results)
(vcov_matrix = vcov(results))
# Draw multivariate normal samples of (beta0, beta1)
set.seed(123)  # for reproducibility
samples = mvrnorm(n = 10000, mu = beta_hat, Sigma = vcov_matrix)
# Compute LD50 from each sample
LD50_samples = -samples[,1] / samples[,2]
# 95% interval
CI = quantile(LD50_samples, probs = c(0.025, 0.975))
# Show the result
cat("Simulation-based 95% confidence interval for LD50:\n")
print(CI)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/861
862: 132人目の素数さん [sage] 2025/06/02(月) 12:04:32.64 ID:GMuHFUYr
library(rjags)  # for jags.model() and coda.samples()
# Data
x <- c(-0.86, -0.3, -0.05, 0.73)
n <- c(5, 5, 5, 5)
y <- c(0, 1, 3, 5)
# JAGS model
model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dbin(p[i], n[i])
    logit(p[i]) <- beta0 + beta1 * x[i]
  }
  # Noninformative priors
  beta0 ~ dnorm(0.0, 0.001)
  beta1 ~ dnorm(0.0, 0.001)
  # Definition of LD50
  LD50 <- -beta0 / beta1
}
"
# Data passed to JAGS
data_jags <- list(x = x, n = n, y = y, N = length(y))
# Initial values
inits <- function() {
  list(beta0 = rnorm(1, 0, 1), beta1 = rnorm(1, 0, 1))
}
# Build and run the model
model <- jags.model(textConnection(model_string), data = data_jags,
                    inits = inits, n.chains = 3)
update(model, 1000)  # burn-in
# Sampling
samples <- coda.samples(model, variable.names = c("beta0", "beta1", "LD50"),
                        n.iter = 10000)
# Results (95% credible interval for LD50)
summary(samples)
LD50_samples <- as.matrix(samples)[, "LD50"]
quantile(LD50_samples, probs = c(0.025, 0.975))
plot(samples)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/862
863: 132人目の素数さん [sage] 2025/06/03(火) 00:31:29.12 ID:CfA5PBxZ
x = c(-0.86, -0.3, -0.05, 0.73)
n = c(5, 5, 5, 5)
y = c(0, 1, 3, 5)
(data = cbind(x, n, y))
(response = cbind(y, n - y))
results = glm(response ~ x, family = binomial(link = "logit"))
summary(results)
b = MASS::mvrnorm(1e5, results$coef, Sigma = vcov(results))
# log(p/(1-p)) = b1 + b2*x
LD = function(b1, b2, p = 0.5) (log(p/(1-p)) - b1)/b2
LD50 = mapply(LD, b[,1], b[,2])
hist(LD50)
quantile(LD50, c(0.025, 0.5, 0.975))
HDInterval::hdi(LD50)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/863
864: 132人目の素数さん [sage] 2025/06/03(火) 23:40:25.34 ID:CfA5PBxZ
k = 10
m = 400
Nmax = 1000
sim = \(){
  N = sample(m:Nmax, 1)                    # draw N from the uniform prior
  if (max(sample(1:N, k)) == m) return(N)  # keep N only if the sample max is m
}
N = NULL
while (length(N) < 1000) N = c(N, sim())   # sim() returns NULL on rejection
summary(N)
quantile(N, c(0.5, 0.95))
http://rio2016.5ch.net/test/read.cgi/math/1723152147/864
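The same posterior can be computed exactly instead of by rejection: with a uniform prior on N over m..Nmax, the probability that the maximum of k draws without replacement from 1..N equals m is choose(m-1, k-1)/choose(N, k). A sketch (my own addition, reusing k, m, Nmax from above):

# Exact posterior over N: likelihood ∝ choose(m-1, k-1) / choose(N, k);
# the choose(m-1, k-1) factor cancels after normalization
Ns = m:Nmax
like = 1 / choose(Ns, k)
pmf = like / sum(like)
sum(Ns * pmf)                        # posterior mean
Ns[which(cumsum(pmf) >= 0.5)[1]]     # posterior median
Ns[which(cumsum(pmf) >= 0.95)[1]]    # 95th percentile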
865: 132人目の素数さん [sage] 2025/06/04(水) 07:49:17.80 ID:n75lIIio
set.seed(123)
solve = function(x, k = 1e5){
  f = function(x) runif(1, x - 0.5, x + 0.5)
  y = replicate(k, mean(sapply(x, f)))
  quantile(y, p = c(0.025, 0.975))
}
solve(c(9, 10, 11, 11, 12))
set.seed(123)  # for reproducibility
solve2 <- function(x, k = 1e5) {
  # For each x_i draw a uniform value and take the mean (bootstrap-style)
  bootstrap_means <- replicate(k, {
    x_true <- runif(length(x), x - 0.5, x + 0.5)
    mean(x_true)
  })
  # 95% interval
  quantile(bootstrap_means, probs = c(0.025, 0.975))
}
# Example run
x_rounded <- c(9, 10, 11, 11, 12)
solve2(x_rounded)
http://rio2016.5ch.net/test/read.cgi/math/1723152147/865
866: 132人目の素数さん [sage] 2025/06/05(木) 13:25:27.68 ID:tGlaBVfa
> stancode(fit)
// generated with brms 2.22.0
functions {
  /* compute monotonic effects
   * Args:
   *   scale: a simplex parameter
   *   i: index to sum over the simplex
   * Returns:
   *   a scalar between 0 and rows(scale)
   */
  real mo(vector scale, int i) {
    if (i == 0) {
      return 0;
    } else {
      return rows(scale) * sum(scale[1:i]);
    }
  }
}
data {
  int<lower=1> N;  // total number of observations
  array[N] int Y;  // response variable
  int<lower=1> K;  // number of population-level effects
  matrix[N, K] X;  // population-level design matrix
  int<lower=1> Kc;  // number of population-level effects after centering
  int<lower=1> Ksp;  // number of special effects terms
  int<lower=1> Imo;  // number of monotonic variables
  array[Imo] int<lower=1> Jmo;  // length of simplexes
  array[N] int Xmo_1;  // monotonic variable
  vector[Jmo[1]] con_simo_1;  // prior concentration of monotonic simplex
  int prior_only;  // should the likelihood be ignored?
}
transformed data {
  matrix[N, Kc] Xc;  // centered version of X without an intercept
  vector[Kc] means_X;  // column means of X before centering
  for (i in 2:K) {
    means_X[i - 1] = mean(X[, i]);
    Xc[, i - 1] = X[, i] - means_X[i - 1];
  }
}
parameters {
  vector[Kc] b;  // regression coefficients
  real Intercept;  // temporary intercept for centered predictors
  simplex[Jmo[1]] simo_1;  // monotonic simplex
  vector[Ksp] bsp;  // special effects coefficients
}
transformed parameters {
  real lprior = 0;  // prior contributions to the log posterior
  lprior += normal_lpdf(b[1] | 0.15, 0.3);
  lprior += normal_lpdf(b[2] | 0.08, 0.3);
  lprior += normal_lpdf(b[3] | 0.8, 0.3);
  lprior += normal_lpdf(b[4] | 0.5, 0.3);
  lprior += normal_lpdf(Intercept | -4, 2);
  lprior += dirichlet_lpdf(simo_1 | con_simo_1);
  lprior += normal_lpdf(bsp[1] | -0.5, 0.3);
}
model {
  // likelihood including constants
  if (!prior_only) {
    // initialize linear predictor term
    vector[N] mu = rep_vector(0.0, N);
    mu += Intercept;
    for (n in 1:N) {
      // add more terms to the linear predictor
      mu[n] += (bsp[1]) * mo(simo_1, Xmo_1[n]);
    }
    target += bernoulli_logit_glm_lpmf(Y | Xc, mu, b);
  }
  // priors including constants
  target += lprior;
}
generated quantities {
  // actual population-level intercept
  real b_Intercept = Intercept - dot_product(means_X, b);
}
http://rio2016.5ch.net/test/read.cgi/math/1723152147/866
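The brms call that produced this Stan code is not shown in the post; from the generated code one can only guess its shape (a Bernoulli-logit model with four population-level effects under normal priors plus one monotonic ordinal predictor). A hypothetical reconstruction, with invented data frame and variable names (d, x1..x4, grade):

library(brms)
# Hypothetical reconstruction; names are invented, priors copied from the Stan code
priors <- c(
  set_prior("normal(0.15, 0.3)", class = "b", coef = "x1"),
  set_prior("normal(0.08, 0.3)", class = "b", coef = "x2"),
  set_prior("normal(0.8, 0.3)",  class = "b", coef = "x3"),
  set_prior("normal(0.5, 0.3)",  class = "b", coef = "x4"),
  set_prior("normal(-4, 2)",     class = "Intercept"),
  set_prior("normal(-0.5, 0.3)", class = "b", coef = "mograde")
)
fit <- brm(y ~ x1 + x2 + x3 + x4 + mo(grade), data = d,
           family = bernoulli(), prior = priors)
stancode(fit)  # prints Stan code of the kind shown above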
867: 132人目の素数さん [sage] 2025/06/06(金) 04:41:04.79 ID:fR053ZqC
# Logistic regression model
ACT = c(16,18,20,22,24,26,28)
n = c(2,7,14,26,13,14,3)
y = c(0,0,6,12,7,9,3)
fit = glm(cbind(y, n - y) ~ ACT, family = binomial())
#predict(fit, newdata = data.frame(ACT = 20)) |> plogis()
predict(fit, newdata = data.frame(ACT = 20), type = "response")
cat("\n\n===== se.fit=TRUE =====\n\n")
pred1 = predict(fit, newdata = data.frame(ACT = 20), type = "response", se.fit = TRUE)
# Confidence interval (response scale)
ci = c(pred1$fit - 1.96*pred1$se.fit, pred1$fit + 1.96*pred1$se.fit)
cat("95% confidence interval: [", ci[1], ",", ci[2], "]\n")
# Prediction (log-odds and its standard error)
pred = predict(fit, newdata = data.frame(ACT = 20), se.fit = TRUE)
# Confidence interval (log-odds scale)
log_odds = pred$fit
se = pred$se.fit
lower_log_odds = log_odds - 1.96 * se
upper_log_odds = log_odds + 1.96 * se
# Convert to probabilities with the logistic function
prob = plogis(log_odds)
lower_prob = plogis(lower_log_odds)
upper_prob = plogis(upper_log_odds)
# Show the result
cat("95% confidence interval: [", lower_prob, ",", upper_prob, "]\n")
http://rio2016.5ch.net/test/read.cgi/math/1723152147/867
868: 132人目の素数さん [sage] 2025/06/06(金) 04:43:11.66 ID:fR053ZqC
Because probabilities are restricted to the range 0 to 1, computing the interval directly on the probability scale is inappropriate near the boundaries (0 or 1), where it can produce negative values or values above 1. Moreover, since logistic regression computes standard errors on the log-odds scale, a linear approximation on the probability scale loses accuracy. Transforming the log-odds interval, as in >>867, keeps the endpoints inside (0, 1); a compact helper is sketched below.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/868
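A compact helper packaging the link-scale approach from >>867 (my own wrapper around the same steps; fit and the ACT data are as defined in that post):

# Wald CI computed on the link (log-odds) scale, then mapped through plogis();
# endpoints are guaranteed to stay inside (0, 1)
ci_response <- function(fit, newdata, level = 0.95) {
  z <- qnorm(1 - (1 - level)/2)
  pr <- predict(fit, newdata = newdata, se.fit = TRUE)   # link scale
  plogis(c(fit = pr$fit, lower = pr$fit - z*pr$se.fit, upper = pr$fit + z*pr$se.fit))
}
ci_response(fit, data.frame(ACT = 20))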
869: 132人目の素数さん [sage] 2025/06/06(金) 06:49:45.63 ID:rCqJxG6F
Probability theory will forever be beyond you. You don't even understand the statistics in Math B. Garbage that can't even grasp that it doesn't understand.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/869
870: 132人目の素数さん [sage] 2025/06/06(金) 08:42:32.31 ID:icvPdYuT
He's just plugging numbers into formulas. His thinking never reaches the question of how those formulas are proved. What he's doing is no different from a middle schooler grinding through university exercises at Kumon.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/870
871: 132人目の素数さん [sage] 2025/06/12(木) 09:46:29.84 ID:z7P0Lqdi
There was some R code in Bayesian Computation with R that I couldn't understand. I thought it was a bug, but it is not listed in the errata at
https://bayesball.github.io/bcwr/corrections.2nd.edition.txt
so I asked the AIs:
>> I think the log-Jacobian term is wrong.
# theta = c(log(eta/(1-eta)), log(K))
> LearnBayes::betabinexch
function (theta, data)
{
  eta = exp(theta[1])/(1 + exp(theta[1]))
  K = exp(theta[2])
  y = data[, 1]
  n = data[, 2]
  N = length(y)
  logf = function(y, n, K, eta) lbeta(K * eta + y, K * (1 - eta) + n - y) -
    lbeta(K * eta, K * (1 - eta))
  val = sum(logf(y, n, K, eta))
  val = val + theta[2] - 2 * log(1 + exp(theta[2]))
  return(val)
}
<bytecode: 0x000001a5a980e758>
<environment: namespace:LearnBayes>
Isn't this the correct version?
betabinexch <- function (theta, data)
{
  eta = exp(theta[1])/(1 + exp(theta[1]))
  K = exp(theta[2])
  y = data[, 1]
  n = data[, 2]
  N = length(y)
  logf = function(y, n, K, eta) lbeta(K * eta + y, K * (1 - eta) + n - y) -
    lbeta(K * eta, K * (1 - eta))
  val = sum(logf(y, n, K, eta))
  val = val + theta[1] + theta[2] - 2 * log(1 + exp(theta[1]))  # log Jacobian term
  return(val)
}
<<
Every AI replied to the effect of:
> Your point is correct. There is a problem with the log-Jacobian term.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/871
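One way to settle a change-of-variables question like this without an AI is a numeric check: integrating a density of K against dK must give the same answer as integrating the transformed density against dθ, where θ = log K, which pins down the exp(θ) Jacobian factor (i.e. + θ on the log scale). A minimal sketch of the principle, using a made-up density rather than the book's model:

# Density of K on (0, Inf): here Exp(1) as a stand-in
f_K <- function(K) exp(-K)
# The same distribution expressed in theta = log(K): the density must carry
# the Jacobian dK/dtheta = exp(theta)
f_theta <- function(theta) f_K(exp(theta)) * exp(theta)
integrate(f_K, 0, Inf)$value          # 1
integrate(f_theta, -Inf, Inf)$value   # also 1, but only with the Jacobian factor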
872: 132人目の素数さん [] 2025/06/12(木) 18:10:24.36 ID:CDb/RdAY
>>871
The make-believe doctor got completely exposed and he's still at it. Pathetic.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/872
873: 132人目の素数さん [sage] 2025/06/12(木) 18:20:13.54 ID:QiRqli9X
>>871
You thought it was a bug and didn't even file an issue? Between that and using this thread as your personal notepad, you really are a parasite who just rides on other people's work and titles. Taxes and pension money are wasted on this guy.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/873
874: 132人目の素数さん [] 2025/06/12(木) 18:27:36.36 ID:CDb/RdAY
A question for >>871! Naturally, nothing short of an answer that would earn full marks on an entrance exam will be accepted.
① Prove that pi is greater than 3.05, where pi is defined as (circumference)/(diameter) and it may not be assumed that pi is already known to exceed 3.14.
② Prove that √2 + √3 is irrational.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/874
875: 132人目の素数さん [sage] 2025/06/13(金) 07:38:09.90 ID:XjvE6Ide
>>874
He can't solve proof problems, so he can't judge whether someone else's proof is correct either.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/875
876: 132人目の素数さん [sage] 2025/06/14(土) 05:53:33.29 ID:nWbGzc8A
(1) Let n be a positive integer. Find the remainder when n^3 + 4n^2 + 3n is divided by 6.
(2) Let n be a positive integer. Find the remainder when n^3 + 7n^2 + 5n is divided by 6.
http://rio2016.5ch.net/test/read.cgi/math/1723152147/876